Effect of integrated case-based and problem-based learning on clinical thinking skills of assistant general practitioner trainees: a randomized controlled trial | BMC Medical Education

Design and participants
This study was designed as a prospective, parallel-group, randomised controlled trial (RCT) with a 1:1 allocation ratio. This study was formally reviewed and approved by the Ethics Committee of Zhuzhou 331 Hospital (approval number: ZZS331YYLL-202102-JX1-J1). Participants were recruited from the Assistant General Practitioner (AGP) training program at Zhuzhou 331 Hospital between 1 June 2021 and 1 September 2023. The study timeline encompassed the entire process, from participant enrolment to the completion of data collection. Participants were included if they (1) demonstrated good communication and comprehension skills, (2) fully engaged in the training program without any absences or instances of truancy, and (3) fulfilled all prescribed learning tasks. The exclusion criteria were as follows: (1) lack of assistant physician practice qualification, (2) inability to complete the training curriculum, (3) failure to complete the pre-course and post-course test assessments, and (4) incomplete or deficient responses to the study questionnaires.
Interventions
The CBL-PBL curriculum was structured into seven modules designed to enhance the participants' clinical reasoning and problem-solving skills (Supplementary Material 1). The initial module comprised three one-hour lectures covering the fundamental concepts of CBL-PBL pedagogy, mind-mapping techniques, and methods for literature retrieval. This foundational segment equipped the participants with a robust theoretical framework and prepared them for subsequent practical exercises. The subsequent six modules consisted of intensive three-hour CBL-PBL sessions. Prior to each session, the instructional team provided trainees with essential materials, including general diagnostic and therapeutic guidelines, five academic papers pertinent to the session's theme, and a 30-minute clinical procedure video to augment their practical understanding. At the beginning of each session, the instructor offered a brief introduction to the topic and outlined the session's objectives. The participants then engaged in a simulated patient encounter to identify and articulate key clinical issues based on the presented case study. In-depth group discussions followed, progressively deepening the trainees' understanding of the case. Instructors encouraged trainees to pose questions and evaluated their responses. Subsequently, a representative from each group summarised the key points, shared the group's findings, and highlighted unresolved issues. Each session concluded with a comprehensive review by the instructor, who addressed the challenges encountered during the discussions with expert insights and guidance.
By contrast, traditional lecture-based learning (LBL) followed a predetermined curriculum and training plan. Students were expected to review lectures or relevant texts in accordance with the syllabus or training schedule in order to better comprehend the upcoming material (Supplementary Material 2). During lectures, the instructor primarily imparted knowledge through verbal presentations, often employing slides and other visual aids to enhance the educational experience. Instructor-led didactic sessions were central to this teaching modality [23, 24].
Outcomes
Clinical thinking skills evaluation scale (CTSES)
The CTSES served as the primary assessment tool in this study and was specifically designed to quantitatively evaluate trainees' clinical thinking skills [2]. It encompasses three fundamental dimensions (critical thinking, systems thinking, and evidence-based thinking) across 24 rating items: six pertain to critical thinking, 11 to systems thinking, and seven to evidence-based thinking [2]. Participants rated the items on a five-point Likert scale, yielding a maximum total score of 120, which was converted to a percentage for statistical analysis (Supplementary Material 3). The scale demonstrated a Cronbach's alpha of 0.962 and a test-retest reliability of 0.861.
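As a minimal sketch of the scoring just described (assuming each of the 24 items is rated 1 to 5 and that the percentage is the observed total over the 120-point maximum; item wording follows Supplementary Material 3), the conversion could be written as:

```python
def ctses_percentage(ratings):
    """Convert 24 five-point Likert ratings (each 1-5) into a percentage.

    The maximum possible total is 24 * 5 = 120, so the percentage is
    the observed total divided by 120, times 100.
    """
    if len(ratings) != 24 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected 24 ratings, each between 1 and 5")
    return sum(ratings) / 120 * 100
```

For example, a trainee who rated every item 3 would total 72 of 120 points, i.e. 60%.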
Assistant general practitioner knowledge assessment
This study was designed with the understanding that clinical knowledge is fundamental to clinical reasoning skills, which led to the creation of two examination papers of equal difficulty (Supplementary Materials 4 and 5). The content of the examination papers strictly adheres to the curriculum for assistant general practitioners and draws on Bloom's Taxonomy of Educational Objectives [25]. Drawing on the study by Yan et al. [23], which employed analogous assessment instruments in a related context, each examination paper comprised 10 cases, each accompanied by five related questions, for a total score of 100 points. The assessments underwent preliminary difficulty evaluations by seasoned educational experts and several rounds of adjustment following small-scale pilot testing to ensure scientific rigor and validity.
The other outcomes included course performance ranking, the number of articles read weekly, and weekly self-study time. Performance ranking was based on course performance ratings using a Course Performance Rating Scale. This scale, informed by prior research [26], comprehensively addresses four key competency domains: communication skills, teamwork and collaboration, comprehension and reasoning, and knowledge and information gathering. Each domain comprises five rating items scored from one to five, each accompanied by a clear definition for reference (Supplementary Material 6).
Data collection
On the day before the start of the first semester of the second academic year, all participants completed the pre-course test and the CTSES questionnaire separately. On the last day of that semester, all participants completed a post-course test of equivalent difficulty and the CTSES questionnaire again. Additionally, the questionnaire included items on the number of articles participants read weekly and their weekly self-study time. After each module ended, teachers graded each student's course performance according to the course performance grading standards. Once all courses were completed, the scores awarded by all teachers for each student were collected and averaged for ranking purposes.
Sample size calculation
In this study, we performed a sample size estimation to detect a difference in clinical thinking skills between assistant general practitioner trainees taught with CBL-PBL and those taught with LBL. A preliminary experiment involved 20 participants randomly assigned to either the CBL-PBL or the LBL group, with 10 participants in each group. The CBL-PBL group had a mean score of 70.53 (SD = 10.02), whereas the LBL group had a mean score of 63.45 (SD = 8.61). Using G*Power software (Version 3.1.9.7), we conducted a sample size calculation with a significance level (α) of 0.05 for a two-tailed test and a target statistical power of 1 − β = 0.80. The initial calculation indicated that 29 participants per group were required to detect a significant difference. Anticipating a 10% dropout rate, we adjusted the sample size to 32 participants per group, for a total of 64. To ensure the robustness of the study, we included 35 participants per group, resulting in a final sample size of 70 participants.
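The calculation above can be approximated with the standard normal-approximation formula for a two-sided, two-sample t-test with equal group sizes. This is a sketch using the pilot means and SDs from the text, not a reproduction of G*Power's noncentral-t computation, which is slightly more conservative and yields the 29 per group reported:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(m1, s1, m2, s2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    t-test with equal group sizes, using the normal approximation."""
    # Cohen's d with the pooled SD of the two pilot groups
    d = abs(m1 - m2) / sqrt((s1**2 + s2**2) / 2)
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for power = 0.80
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Pilot data from the text: CBL-PBL 70.53 (SD 10.02), LBL 63.45 (SD 8.61)
print(n_per_group(70.53, 10.02, 63.45, 8.61))  # -> 28
```

The normal approximation gives 28 per group; G*Power's exact noncentral-t solution adds roughly one participant, matching the 29 per group in the text.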
Randomization
In this study, we employed a simple randomisation method in which independent randomisation staff, uninvolved in recruitment or the intervention, sequentially numbered participants based on their registration numbers and assigned new identifiers ranging from 1 to N [23, 24]. Participants with odd identifiers were allocated to the CBL-PBL group, whereas those with even identifiers were assigned to the LBL group. This method involved neither block randomisation nor stratified randomisation, and no restrictions such as block sizes were applied. The allocation sequence was generated by independent randomisation staff who did not participate in subsequent recruitment or intervention processes. Participants were recruited by the research team, and allocation was completed automatically based on the aforementioned numbering system. The study was not triple-blinded: only the randomisation staff were unaware of the group allocations, while the participants and intervention providers were not blinded. Students were aware of their group assignments, and educators were cognisant of the instructional methods they were delivering.
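The odd/even allocation rule described above can be sketched as follows (the function and variable names are illustrative, not taken from the study's materials):

```python
def allocate(participant_ids):
    """Assign sequential identifiers 1..N in registration order and
    allocate odd identifiers to CBL-PBL and even identifiers to LBL."""
    groups = {"CBL-PBL": [], "LBL": []}
    for identifier, pid in enumerate(participant_ids, start=1):
        group = "CBL-PBL" if identifier % 2 == 1 else "LBL"
        groups[group].append((identifier, pid))
    return groups

# Hypothetical registration numbers: identifiers 1 and 3 go to
# CBL-PBL, identifiers 2 and 4 go to LBL
groups = allocate(["reg-101", "reg-102", "reg-103", "reg-104"])
```

Note that with an even N this rule also guarantees the 1:1 allocation ratio stated in the design.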
Statistical methods
Fisher's exact test was used to evaluate the distribution of sex and place of birth between the two groups. For between-group comparisons of continuous variables, the independent-samples t-test was used if the data were normally distributed with homogeneous variance; otherwise, the Mann-Whitney U test was used. For within-group paired-sample comparisons, the paired-samples t-test was used if the data were normally distributed with homogeneous variance, Welch's t-test was used when variances were not homogeneous, and the Wilcoxon signed-rank test was applied when the data were not normally distributed. One-way analysis of variance (ANOVA) was used to compare clinical thinking scores among three or more groups. Multiple linear regression analyses were conducted to identify influencing factors. Statistical significance was set at two-tailed p < 0.05. All analyses were performed using IBM SPSS (version 27.0; IBM Corporation, Armonk, NY, USA).
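The test-selection logic for between-group comparisons could be sketched as follows. This uses SciPy rather than SPSS, and it assumes Shapiro-Wilk for the normality check and Levene's test for homogeneity of variance, since the text does not name the specific preliminary tests used:

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Select and run a between-group test following the logic in the
    text: independent t-test for normal data with equal variances,
    Welch's t-test for normal data with unequal variances, and the
    Mann-Whitney U test for non-normal data. Returns (name, p-value)."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if not normal:
        result = stats.mannwhitneyu(a, b, alternative="two-sided")
        return "Mann-Whitney U", result.pvalue
    if stats.levene(a, b).pvalue > alpha:
        return "independent t-test", stats.ttest_ind(a, b).pvalue
    return "Welch's t-test", stats.ttest_ind(a, b, equal_var=False).pvalue
```

This covers only the between-group branch; the paired t-test and Wilcoxon signed-rank test would follow the same pattern for within-group comparisons.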