Knowledge acquisition and student perceptions of three teaching methods: a randomized trial of live, flipped, and interactive flipped classrooms | BMC Medical Education


Trial design

This study employed a parallel-group, multi-arm, randomized experiment with a 1:1:1 allocation ratio among three teaching methods: live lecture, flipped video lecture, and interactive flipped lecture. The study design remained unchanged after commencement.

Ethical approval

Ethical approval was granted by the Research Ethics Committee at the Deanship of Graduate Studies and Research (Approval No: D-F-H-18-Oct), and all participants provided informed consent.

Participants and eligibility criteria

Eligible participants were fourth-year dental students enrolled in “Clinical Orthodontics I.” Students from other academic years or those who had previously withdrawn from the course were excluded.

All participants in the study provided their consent before starting.

Randomization

Students were categorized based on their GPA into four categories (following the university GPA grading system): Excellent (GPA 3.6–4.0), Very Good (GPA 3.0–3.59), Good (2.5–2.99), and Satisfactory (2.0–2.49). Each student’s name, along with their GPA classification, was written on a paper slip, folded, and placed into one of four corresponding GPA-specific boxes labeled ‘Excellent,’ ‘Very Good,’ ‘Good,’ and ‘Satisfactory.’

Once all names were assigned, the slips were unfolded and the students listed under their respective study groups. This stratified randomization produced balanced groups of 52 students per intervention, for a total of 156 participants. The study design and the flow of participants throughout the research process are illustrated in Fig. 1.
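The stratified allocation described above can be sketched in Python. This is an illustrative reconstruction, not the authors’ actual procedure: the student names and the sizes of the four GPA strata are assumptions chosen only so the totals sum to 156.

```python
import random

# Hypothetical roster: names and stratum sizes are assumptions, not study data
strata = {
    "Excellent":    [f"student_{i:03d}" for i in range(1, 25)],    # 24 students
    "Very Good":    [f"student_{i:03d}" for i in range(25, 85)],   # 60 students
    "Good":         [f"student_{i:03d}" for i in range(85, 135)],  # 50 students
    "Satisfactory": [f"student_{i:03d}" for i in range(135, 157)], # 22 students
}
arms = {"Live": [], "Flipped Video": [], "Flipped Interactive": []}

random.seed(42)  # reproducible example
counter = 0
for members in strata.values():
    random.shuffle(members)  # mimics drawing folded slips from a GPA box
    for student in members:
        # Deal shuffled students round-robin across the three arms,
        # carrying the offset between strata so totals stay balanced.
        list(arms.values())[counter % 3].append(student)
        counter += 1

print({arm: len(members) for arm, members in arms.items()})
# → {'Live': 52, 'Flipped Video': 52, 'Flipped Interactive': 52}
```

Because 156 is divisible by three and the round-robin counter carries over between strata, each arm receives exactly 52 students while every GPA band is spread nearly evenly across arms.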

Fig. 1

CONSORT diagram showing the flow of participants throughout the study. The diagram illustrates the process of randomization, allocation to the three intervention groups (Live Lecture, Flipped Video Lecture, and Flipped Interactive Lecture), and participant flow during pre- and post-test assessments.

Intervention

This study was conducted during the second half of the semester to avoid conflicts with midterm and final exams. The topic ‘Treatment Planning in Orthodontics I’ was selected because it combines fundamental educational concepts with higher-level critical thinking skills within the curriculum. The lecturer (AS) used the Clinical Orthodontics I Theory course syllabus to produce the lecture presentation, which served as the sole source of material for the three instructional techniques: live lecture, flipped video-recorded lecture, and flipped interactive lecture. The instructor (AS) recorded the presentation on screen together with the audio for the lecture’s content, following the presentation as a guide to ensure that the information was delivered consistently, just as it would be in a live lecture. The video was edited with Ulead VideoStudio 2018 (video-editing software developed by Ulead Systems), and an interactive video for the flipped interactive lecture group was created with Mindstamp (an interactive-video platform developed by Mindstamp). The recorded lectures did not include closed captioning; students engaged with the material through audio-visual content only.

The Flipped Video Lecture model allowed students to watch a pre-recorded lecture at their own pace, with the ability to pause, rewind, and review the content as needed; this intervention had no embedded activities. In contrast, the Interactive Flipped Lecture included engagement elements, such as in-video quizzes and interactive prompts, spaced every 7–10 min to enhance student engagement.

Assignment and assessment process

Students were randomly assigned to one of the three intervention groups and received detailed instructions on their designated lecture format and attendance requirements. A formal announcement was made a day before the study, informing students of their group allocation and session details.

The Live Lecture group attended a 35-minute in-person lecture in Hall 1 of the College of Dentistry, whereas students in the Flipped Video Lecture and Interactive Flipped Lecture groups were required to watch their assigned lecture within the two days before attending the mandatory in-person discussion session. Students accessed the videos via the university’s online portal, where completion was monitored to ensure compliance.

After completing their respective lectures, all students attended a single mandatory, standardized Q&A session in Hall 1, moderated by the same instructor to maintain consistency across groups. To ensure assessment integrity, students were explicitly instructed not to consult external resources while taking the quizzes. The post-intervention quiz was locked and administered in a supervised, in-person setting at the university, immediately following the Q&A session. This prevented students from collaborating or sharing answers between groups.

After completing the 10-item multiple-choice post-quiz, students were provided with their designated questionnaire, adapted from validated instruments, to assess their perceptions of their assigned teaching method. The surveys were distributed via Microsoft Forms to facilitate data collection.

Importantly, quiz scores were collected solely for research purposes and were not included in students’ official course grades.

Live lecture group

Students in this group attended a 35-minute live lecture delivered in Hall 1 of the College of Dentistry. The session allowed for direct interaction with the instructor, enabling real-time questioning and clarification of concepts.

Before the lecture, students completed a 10-item multiple-choice pre-intervention test. Following the lecture, students participated in the standardized Q&A session, after which they completed the post-intervention quiz and perception questionnaire (Supplementary File 1).

Flipped video recorded lecture group

Students in this group were given access to a 35-minute video-recorded lecture, which they could watch on their devices at home at their preferred pace. The lecture did not contain embedded interactive elements.

  • Pre-intervention test: Completed before watching the video.

  • Post-intervention test: Administered after the in-person Q&A session in a supervised environment, ensuring no access to external resources.

  • Questionnaire: Distributed after the post-test to assess student perceptions (Supplementary File 2).

Flipped interactive lecture group

Students in this group received a 35-minute interactive video, created using Mindstamp, which included embedded quizzes, interactive questions, and prompts every 7–10 min to enhance engagement.

  • Pre-test: Completed before starting the video.

  • Post-test: Taken in-person immediately after the Q&A session, under the same conditions as other groups.

  • Questionnaire: Administered after the post-test to evaluate student perceptions (Supplementary File 3).

Blinding

The quiz questions were developed by one of the authors (HA), who was blinded to the specifics of each teaching method. HA used only the lecture’s learning objectives to design the questions, ensuring that the lecturer (AS) had no prior knowledge of the quiz content to avoid giving students any hints throughout the presentation. The teaching method could not be concealed from the students.

Outcome measures

Academic Performance: Knowledge acquisition was measured using pre- and post-test scores on multiple-choice quizzes designed for each intervention group to objectively assess improvements in academic performance. Ten multiple-choice questions, delivered via Microsoft Forms, were given to the students before and after the intervention by one of the researchers (HA), who used the presentation as a reference.

Student Perceptions: The survey questionnaire assessing student perceptions of each teaching method was adapted from a validated instrument used by Shqaidef et al. (2020) [10] in a similar dental education context. Minor modifications were made (e.g., removal of some questions) to fit the study’s scope. The instrument was pilot-tested with a small group (n = 12) to confirm suitability. Responses were collected on a 5-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree) and analyzed to gauge student engagement and satisfaction. The students answered the questionnaire via a link (depending on their study group) using Microsoft Forms for ease of analysis and data gathering. The final questionnaires for each group are provided in Supplementary Files 1–3.

Grade Point Average (GPA), a cumulative measure of a student’s academic performance, was calculated on a scale from 0 to 4. GPA scores were based on the average of students’ grades from the last three years of study, served as an indicator of overall academic achievement, and were factored into the analysis to evaluate the correlation between prior academic performance and the effectiveness of each teaching method.

Sample size calculation

A power analysis was conducted using a one-way ANOVA F-test to determine the appropriate sample size for this study. The analysis was based on an expected effect size (Cohen’s d) of 0.4, which is considered a moderate effect size in educational research. The standard deviation of the variation in the means was set at 0.82, while the common standard deviation within a group was assumed to be 1.00 [16, 17]. The desired statistical power was set at 0.95 with a significance level (alpha) of 0.05, which is standard for detecting meaningful differences between groups. Based on these parameters, the power analysis indicated that a minimum of 96 participants (32 per group) would be required to detect a statistically significant difference between the teaching methods.
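As a rough cross-check of the calculation above, the power of a balanced one-way ANOVA can be computed from the noncentral F distribution. The sketch below uses SciPy and assumes the stated effect size of 0.4 is interpreted as the ANOVA effect size Cohen’s f (an assumption; the text reports it as Cohen’s d).

```python
from scipy.stats import f, ncf

def anova_power(n_per_group, k=3, effect_f=0.4, alpha=0.05):
    """Power of a balanced one-way ANOVA with k groups at effect size f."""
    n_total = n_per_group * k
    df1, df2 = k - 1, n_total - k
    nc = effect_f ** 2 * n_total           # noncentrality parameter
    f_crit = f.ppf(1 - alpha, df1, df2)    # critical value under H0
    return 1 - ncf.cdf(f_crit, df1, df2, nc)

print(round(anova_power(32), 3))  # power near the reported minimum (32/group)
print(round(anova_power(52), 3))  # power at the achieved sample (52/group)
```

Under these assumptions the 32-per-group design sits close to the 0.95 power target, and enrolling 52 per group pushes power well above it.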

However, to improve statistical power, account for potential attrition, and ensure adequate representation across all GPA categories (including the relatively smaller “Satisfactory” group), we invited all 196 students enrolled in the course to participate; attendance was voluntary, as participation could not be mandated for ethical reasons. A total of 156 students attended voluntarily and were randomized into three intervention groups (52 students per group). The larger sample captured performance variation across the full academic spectrum and reduced the risk of underpowering the analysis.

Due to ethical considerations and academic policy, a true control group was not included in the current study, as withholding instruction on core curriculum material would not be permitted.

Data collection and statistical analysis

Data on students’ pre- and post-intervention quiz scores were collected through Microsoft Forms for automated scoring. Perceptions of each teaching method were also gathered via the questionnaire distributed through Microsoft Forms.

Statistical analyses included paired t-tests to assess within-group pre- to post-intervention differences and one-way ANOVA to compare scores across the three groups. Pearson correlation analysis evaluated the association between GPA and post-test performance. The significance level was set at p < 0.05.
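This analysis pipeline can be illustrated with SciPy. The scores and GPAs below are simulated placeholders (the gains and spreads are assumptions, not study data); only the three tests named above are applied.

```python
import random
from scipy.stats import ttest_rel, f_oneway, pearsonr

random.seed(1)
n = 52  # students per arm

def simulate(gain):
    """Simulated pre/post quiz scores for one arm -- illustrative only."""
    pre = [random.gauss(5.0, 1.5) for _ in range(n)]
    post = [p + random.gauss(gain, 1.0) for p in pre]
    return pre, post

live_pre, live_post = simulate(2.0)
video_pre, video_post = simulate(2.3)
inter_pre, inter_post = simulate(2.6)

# Within-group change: paired t-test on pre vs. post scores
t, p_within = ttest_rel(live_post, live_pre)

# Between-group comparison of post-test scores: one-way ANOVA
F, p_between = f_oneway(live_post, video_post, inter_post)

# Association between GPA and post-test score: Pearson correlation
gpa = [random.uniform(2.0, 4.0) for _ in range(n)]
r, p_corr = pearsonr(gpa, live_post)

print(f"paired t: p={p_within:.4g}; ANOVA: p={p_between:.4g}; r={r:.2f}")
```

In the real analysis, each group’s actual pre/post scores and recorded GPAs would replace the simulated lists; the function calls are unchanged.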
