Innovating medical education using a cost-effective and scalable VR platform with AI-driven haptics
Simulation algorithms for VR medical training incorporate haptic feedback, real-time performance assessment, and adaptive learning to improve medical education outcomes. A typical clinical scenario is selected, and a virtual environment is designed in accordance with the training goals. Haptic devices are then attached and calibrated to provide life-like tactile feedback, mimicking the tissue resistance and physical contact encountered during real procedures31. This calibration provides authentic force feedback to trainees so that motor memory and procedural integrity are reinforced. Table 6 lists all symbols with their denotations and units.
The overall system improves the learning experience by computing dynamic force feedback based on a user’s interactions with the system and employing physics-based models of force feedback to represent real sensations.
$${F}_{h}={k}_{s}\Delta x+{c}_{d}\frac{d\Delta x}{dt}$$
(1)
Before the scenario starts, the system specifies performance indicators, including action duration, correctness of actions, and correctness of decisions. These measures are expressed in standardized form, where \(E\) represents the root-mean-square deviation of the trainee's actions from the ideal procedure, defined by
$$E=\sqrt{\frac{1}{N}{\sum\:}_{i=1}^{N}\:{\left({x}_{i}-{x}_{i}^{\text{*}}\right)}^{2}}$$
(2)
with \(x_{i}\) representing the trainee's actions and \(x_{i}^{*}\) the ideal actions. At its heart, the algorithm runs in a continuous feedback loop, tracking and reacting to learner behavior in real time. The trainee's movements, decisions, and response times during each step are captured by the system. Intelligent virtual patients that incorporate AI help the algorithm generate realistic responses to the trainee's actions, creating scenarios that adapt to varying skill levels32. If mistakes are made, the system alerts the trainee immediately and logs them for further analysis. This feedback process is critical because it helps avoid the repetition of bad habits. The feedback intensity is modulated using Eq. (3):
$$I(t)={I}_{0}\cdot \text{exp}(-\lambda E(t))\cdot \beta (s)$$
(3)
where \(I_{0}\) is the baseline intensity, \(\lambda\) is the learning rate, and \(\beta(s)\) is a skill-dependent scaling factor. The haptic feedback is then dynamically adjusted during the simulation in response to the trainee's movements34. The system derives the correct force responses from physics-based models of various tissue types and procedural resistance, allowing trainees to build muscle memory and procedural awareness in a more realistic and immersive training environment. After the scenario is completed, all collected data are processed to generate a detailed performance report. This analysis includes quantitative measures (time taken, number of errors, and procedural accuracy) and qualitative evaluations (decision-making and technique). Standardized scoring methods are used to evaluate performance against established benchmarks35, as shown in Fig. 3.

Fig. 3. Proposed VR medical training simulation with haptic feedback, real-time performance monitoring, and adaptive learning principles.
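To illustrate how the procedural error of Eq. (2) drives the intensity modulation of Eq. (3), a minimal Python sketch is given below; the parameter values and the treatment of \(\beta(s)\) as a constant are assumptions for demonstration, not parameters of the implemented platform.

```python
import numpy as np

def procedural_error(actions, ideal_actions):
    """RMS deviation E between trainee actions and ideal actions (Eq. 2)."""
    actions = np.asarray(actions, dtype=float)
    ideal_actions = np.asarray(ideal_actions, dtype=float)
    return np.sqrt(np.mean((actions - ideal_actions) ** 2))

def feedback_intensity(error, baseline_intensity=1.0, learning_rate=0.8, beta=1.0):
    """Feedback intensity I(t) = I0 * exp(-lambda * E(t)) * beta(s) (Eq. 3).
    beta is treated as a constant skill factor in this sketch."""
    return baseline_intensity * np.exp(-learning_rate * error) * beta

# Example: trainee tool positions vs. the ideal trajectory (arbitrary units)
E = procedural_error([1.2, 0.9, 1.4], [1.0, 1.0, 1.0])
print(f"E = {E:.3f}, I(t) = {feedback_intensity(E):.3f}")
```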
The algorithm incorporates adaptive learning principles by adjusting the scenario difficulty based on the trainee’s performance. This adaptation is governed by the equation
$${\Phi}={\omega}_{1}{P}_{t}+{\omega}_{2}{P}_{a}+{\omega}_{3}{P}_{e},$$
where the weights \(\omega_{1}\), \(\omega_{2}\), and \(\omega_{3}\) balance different performance aspects to determine the overall proficiency36. Finally, the algorithm saves all performance data for longitudinal analysis. This data collection enables the tracking of trainee progress over time and helps identify areas requiring additional focus. The stored information also contributes to the continuous improvement of training scenarios and the calibration of performance metrics. The effectiveness of the algorithm was demonstrated through improved learning outcomes, with studies showing significant improvements in procedural accuracy and decision-making abilities37. The integration of haptic feedback and real-time error detection creates a comprehensive learning environment that bridges the gap between theoretical knowledge and practical skill.
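As a brief worked example of the weighted proficiency score, the sketch below combines three performance components with normalized weights; the component values, their interpretation, and the weights are illustrative assumptions.

```python
def proficiency(p_t, p_a, p_e, w=(0.3, 0.5, 0.2)):
    """Overall proficiency Phi = w1*P_t + w2*P_a + w3*P_e.
    Component meanings and weights here are assumptions for illustration."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights are assumed to sum to 1"
    return w[0] * p_t + w[1] * p_a + w[2] * p_e

print(proficiency(0.8, 0.9, 0.7))  # -> approximately 0.83
```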
Algorithm for VR-Based Medical Training Simulation.
Algorithm 1 provides a structured workflow for the VR medical training simulation, guiding the process from scenario setup to performance feedback.
Algorithm 1: VR Medical Training Simulation Workflow

Input: Selected medical scenario S, trainee profile P
Output: Performance report with scores, feedback, and improvement suggestions

1. BEGIN
2. Load Scenario(S)
3. Initialize VR Environment
4. Connect Haptic Devices and Calibrate for Scenario S
5. Display Initial Instructions to Trainee P
6. Set Performance Metrics (completion time, accuracy, decision-making)
7. WHILE Scenario not Completed DO
8.     Display Scenario Step to Trainee
9.     Capture Trainee Actions
10.    Generate Real-Time Feedback
11.    IF Error Detected THEN
12.        Display Error Alert
13.        Log Error in Performance Metrics
14.    END IF
15.    Provide Haptic Feedback Based on Action
16.    Update Scenario Step
17. END WHILE
18. Capture Completion Time
19. Assess Procedural and Decision Accuracy
20. Generate Performance Report
21. Display Feedback to Trainee P
22. Save Performance Data for Further Analysis
23. END
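A minimal Python skeleton of this workflow is sketched below. The helper functions (capture_action, detect_error, apply_haptic_feedback) are illustrative stubs standing in for the engine- and device-specific implementations, not the platform's actual API.

```python
import time

def run_training_session(scenario, trainee):
    """Illustrative skeleton of Algorithm 1: scenario setup, the real-time
    feedback loop, and post-scenario assessment. All helpers are stubs."""
    metrics = {"errors": [], "actions": 0}
    start = time.time()

    for step in scenario["steps"]:                      # WHILE scenario not completed
        action = capture_action(trainee, step)          # capture trainee input
        metrics["actions"] += 1
        error = detect_error(action, step["expected"])  # compare with expected action
        if error is not None:                           # IF error detected
            metrics["errors"].append((step["name"], error))
        apply_haptic_feedback(action, step)             # force response for this action

    metrics["completion_time"] = time.time() - start
    metrics["accuracy"] = 1 - len(metrics["errors"]) / max(metrics["actions"], 1)
    return metrics                                      # data for the performance report

# --- stubs (placeholders for engine/device integration) ---
def capture_action(trainee, step):  return step["expected"]
def detect_error(action, expected): return None if action == expected else "deviation"
def apply_haptic_feedback(action, step): pass

demo = {"steps": [{"name": "incision", "expected": "scalpel_at_marked_site"}]}
print(run_training_session(demo, trainee="P"))
```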
Requirements gathering and analysis
The initial phase involved a thorough requirement analysis to define the platform's educational goals and technical capabilities. Medical educators, trainees, and VR developers collaboratively outlined the essential features, scenarios, and haptic requirements, ensuring that the platform met both pedagogical and technical expectations38.
This phase can be mathematically structured into the following key steps:
-
1.
Identifying Core Skills: The first step was to determine the set of medical skills to be simulated. Let the set of core skills be represented as.
$$S=\left\{{s}_{1},{s}_{2},\dots,{s}_{n}\right\}$$
where \(\:{s}_{i}\) represents a specific medical skill, such as surgical techniques, emergency interventions, or routine medical procedures. The prioritization of skills can be quantified based on their training demand \((D({s}_{i}))\) and impact on clinical outcomes \((I({s}_{i}))\). The selection of skills \({S}^{\text{*}}\) to include in the platform can then be expressed as:
$${S}^{\text{*}}=\text{a}\text{r}\text{g}\underset{{s}_{i}\in S}{\text{m}\text{a}\text{x}} \left[\alpha D\left({s}_{i}\right)+\beta I\left({s}_{i}\right)\right]$$
where \(\alpha\) and \(\beta\) are weights assigned to training demand and clinical impact, respectively.
-
2.
Defining Training Objectives: For each skill \({s}_{i}\in {S}^{\text{*}}\), specific and measurable training objectives are established. These objectives can be modelled as optimization problems39. Let \(T\) represent the time taken to complete a task, and \(A\) represent the accuracy of procedural steps. The training objective function can be expressed as
Objective: \(\text{m}\text{i}\text{n}T, \text{m}\text{a}\text{x}A\)
subject to constraints:
$$T\le {T}_{\text{m}\text{a}\text{x}}, A\ge {A}_{\text{m}\text{i}\text{n}}$$
where \({T}_{\text{m}\text{a}\text{x}}\) is the maximum allowable time, and \({A}_{\text{m}\text{i}\text{n}}\) is the minimum required accuracy.
The overall performance of a trainee can be represented by a composite score \(P\), defined as:
$$P={\omega}_{1}\frac{{T}_{\text{m}\text{a}\text{x}}-T}{{T}_{\text{m}\text{a}\text{x}}}+{\omega}_{2}\frac{A-{A}_{\text{m}\text{i}\text{n}}}{1-{A}_{\text{m}\text{i}\text{n}}}$$
where \({\omega}_{1}\) and \({\omega}_{2}\) are the weights assigned to time efficiency and accuracy, respectively.
-
3.
Assessing Technical Feasibility: The technical feasibility of the VR platform was analyzed by evaluating its hardware, software, and haptic requirements. Key metrics include graphics fidelity \((G)\), system response time \((R)\), and computational capability \((C)\).
Graphics Fidelity: The fidelity of the graphics must meet or exceed a threshold \({G}_{\text{m}\text{i}\text{n}}\) to ensure realistic simulations. This can be expressed as
$$G\ge {G}_{\text{m}\text{i}\text{n}}$$
System response time
The response time \(R\) should be minimized to maintain interactivity and realism, subject to a maximum allowable response time\({R}_{\text{max}}\)
$$R\le {R}_{\text{m}\text{a}\text{x}}$$
Computational capability
The computational capability of the system, represented by \(C\), must be sufficient to handle the required graphics and interaction complexity40. This is given by.
$$C\ge {C}_{\text{req}}$$
where \({C}_{\text{r}\text{e}\text{q}}\) is the required computational power, which can be a function of \(G,R\), and the number of concurrent users \(U\) :
$${C}_{\text{r}\text{e}\text{q}}=f(G,R,U)$$
Feasibility condition
The overall feasibility of the platform was ensured when all the constraints were satisfied.
$$G\ge {G}_{\text{m}\text{i}\text{n}}, R\le {R}_{\text{m}\text{a}\text{x}}, C\ge {C}_{\text{r}\text{e}\text{q}}$$
By mathematically defining these steps, the requirements gathering and analysis phase provides a structured approach to ensure that the VR platform aligns with the educational goals and technical specifications.
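As a worked illustration of these steps, the following Python sketch ranks candidate skills by the weighted demand/impact criterion, computes the composite performance score \(P\), and checks the feasibility constraints; all numeric values (weights, thresholds, demand and impact scores) are invented for demonstration.

```python
def select_skills(skills, alpha=0.6, beta=0.4):
    """Rank skills by alpha*D(s) + beta*I(s); the top entry corresponds to S*."""
    return sorted(skills, key=lambda s: alpha * s["demand"] + beta * s["impact"],
                  reverse=True)

def performance_score(T, A, T_max, A_min, w1=0.4, w2=0.6):
    """P = w1*(T_max - T)/T_max + w2*(A - A_min)/(1 - A_min)."""
    return w1 * (T_max - T) / T_max + w2 * (A - A_min) / (1 - A_min)

def feasible(G, R, C, G_min=60, R_max=0.02, C_req=1.0):
    """Feasibility holds when G >= G_min, R <= R_max, and C >= C_req."""
    return G >= G_min and R <= R_max and C >= C_req

skills = [{"name": "suturing", "demand": 0.9, "impact": 0.8},
          {"name": "intubation", "demand": 0.7, "impact": 0.95}]
print([s["name"] for s in select_skills(skills)])            # ranked skill set
print(round(performance_score(T=300, A=0.92, T_max=600, A_min=0.8), 3))  # 0.56
print(feasible(G=90, R=0.011, C=1.5))                        # True
```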
VR environment development
The VR environment construction stage involves establishing a realistic and immersive clinical environment in the virtual world. This includes 3D modelling, scripting scenes, and ensuring that interactive elements function as expected.
3D modelling and scene construction
Realistic 3D models of medical devices and patient anatomy are generated to make the learning environment immersive. Anatomically accurate, life-like models are built and assembled using tools such as Unity and Unreal Engine.
-
a.
3D Model Representation: The 3D models were constructed using triangular mesh. Each triangle in the mesh is represented by three vertices in the 3D space as follows:
$${T}_{i}=\left\{{v}_{1},{v}_{2},{v}_{3}\right\},\quad {v}_{j}=\left({x}_{j},{y}_{j},{z}_{j}\right),\ j\in \left\{1,2,3\right\}$$
where \(({x}_{j},{y}_{j},{z}_{j})\) are the coordinates of vertex \({v}_{j}\) in 3D space. The collection of triangles defines the surfaces of a 3D object.
-
b.
Transformation Matrices: Transformation matrices were applied to accurately position and orient the 3D models, as follows.
Translation:
$$\mathbf{T}=\left[\begin{array}{cccc}1&0&0&{t}_{x}\\ 0&1&0&{t}_{y}\\ 0&0&1&{t}_{z}\\ 0&0&0&1\end{array}\right]$$
Rotation (around the z-axis):
$${\mathbf{R}}_{z}\left(\theta\right)=\left[\begin{array}{cccc}\cos\theta &-\sin\theta &0&0\\ \sin\theta &\cos\theta &0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right]$$
Scaling:
$$\mathbf{S}=\left[\begin{array}{cccc}{s}_{x}&0&0&0\\ 0&{s}_{y}&0&0\\ 0&0&{s}_{z}&0\\ 0&0&0&1\end{array}\right]$$
The final position of a vertex \(v\) after transformations is given by:
$${v}^{{\prime}}=\mathbf{T}\cdot{\mathbf{R}}_{z}(\theta) \cdot \mathbf{S}\cdot v$$
-
c.
Scene Construction: Objects in a clinical environment must be placed to avoid overlap and ensure spatial realism. This involves solving the following optimization problems for object placement:
$$\text{m}\text{i}\text{n}\sum_{i=1}^{N}\:{\parallel{p}_{i}-{q}_{i}\parallel}^{2}, \, \text{subject to}\,{d}_{ij}\ge\:{d}_{\text{m}\text{i}\text{n}},\forall\:i \ne j$$
where \({p}_{i}\) and \({q}_{i}\) are the current and desired positions of objects, \({d}_{ij}\) is the distance between objects \(i\) and \(j\), and \({d}_{\text{m}\text{i}\text{n}}\) is the minimum allowed distance to prevent overlap.
-
d.
Interactive Components: Interactive elements, such as medical instruments and lighting, are governed by user input. The interaction was modelled using the following physical equations:
-
Instrument Motion: Modelled using kinematics, if an instrument is moved, its position \(x (t)\) is updated as:
$$x\left(t\right)={x}_{0}+v\cdot t+\frac{1}{2}a\cdot {t}^{2}$$
where \({x}_{0}\) is the initial position, \(v\) is velocity, and \(a\) is acceleration.
-
Lighting: Surface illumination is modelled using the Phong reflection model, where the intensity at a point is:
$$I={I}_{a}+{I}_{d}(\mathbf{L}\cdot \mathbf{N})+{I}_{s}(\mathbf{R}\cdot \mathbf{V})^{n}$$
where:
-
\({I}_{a}\) : Ambient light intensity
-
\({I}_{d}\) : Diffuse reflection intensity, dependent on the angle between the light direction \(\mathbf{L}\) and the surface normal \(\mathbf{N}\)
-
\({I}_{s}\) : Specular reflection intensity, based on the angle between the reflected light \(\mathbf{R}\) and the viewer’s direction \(\mathbf{V}\)
-
\(n\) : Shininess coefficient
Model Verification.
Verification involves comparing the virtual models with real-world anatomical data.
$$\Delta=\frac{\parallel{\mathbf{M}}_{\text{virtual}}-{\mathbf{M}}_{\text{real}}\parallel}{\parallel{\mathbf{M}}_{\text{real}}\parallel}\times100$$
where \({\mathbf{M}}_{\text{virtual}}\) and \({\mathbf{M}}_{\text{real}}\) are the vertices of the virtual and real models, respectively. \({\Delta}\) represents the percentage error, which must be minimized. By leveraging these mathematical principles, the VR environment achieves a balance between precision and interactivity, thereby delivering immersive and functional clinical simulations.
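The matrix pipeline and the verification error \(\Delta\) can be prototyped as in the following NumPy sketch; the translation, rotation, and scale values and the example vertex data are arbitrary illustrative choices.

```python
import numpy as np

def translation(tx, ty, tz):
    M = np.eye(4); M[:3, 3] = [tx, ty, tz]; return M

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# v' = T . Rz(theta) . S . v  (homogeneous coordinates)
v = np.array([1.0, 0.0, 0.0, 1.0])
v_new = translation(0.1, 0.2, 0.0) @ rotation_z(np.pi / 2) @ scaling(2, 2, 2) @ v
print(v_new[:3])

def verification_error(M_virtual, M_real):
    """Delta = ||M_virtual - M_real|| / ||M_real|| * 100 (percentage error over vertices)."""
    M_virtual, M_real = np.asarray(M_virtual), np.asarray(M_real)
    return np.linalg.norm(M_virtual - M_real) / np.linalg.norm(M_real) * 100

print(verification_error([[1.02, 0.0, 0.0]], [[1.0, 0.0, 0.0]]))  # ~2%
```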
Scenario scripting
VR-based training simulators in the medical field are designed to cover a wide variety of clinical circumstances, from routine procedures to complex surgical operations. These scenarios are scripted to be interactive, with branching decision points that evolve in line with real-world medical decision-making.
-
1.
Decision Points: Users make key decisions, such as incision location or procedure type, that affect the scenario’s outcome, encouraging critical thinking and clinical decision-making.
-
2.
Immediate Feedback: The simulator uses real-time scoring (for example, of anaesthesiology skills) to alert students to procedural errors and best practices. A key requirement is a 3D model that realistically represents the anatomical structures and physical environment. 3D modelling generally uses mathematical transformations to project 3D world coordinates into 2D display coordinates. To preserve anatomical accuracy and spatial consistency, transformation matrices are employed to translate, rotate, and scale objects within the VR environment. This enables real-time representation of the interactions required for procedural training and orientation.
-
3.
Transformation Matrix: A 3D point in the global coordinate system is transformed by means of a matrix. The transformation equation for a point is as follows:
$$\left[\begin{array}{c}{x}^{{\prime\:}}\\\:{y}^{{\prime\:}}\\\:{z}^{{\prime\:}}\\\:1\end{array}\right]=\left[\begin{array}{ll}R&\:T\\\:0&\:1\end{array}\right]\left[\begin{array}{c}x\\\:y\\\:z\\\:1\end{array}\right]$$
where:
-
\(R\) is a \(3 \times 3\) rotation matrix that rotates the point,
-
\(T\) is a \(3 \times 1\) translation vector that shifts the point in space,
-
\(({x}^{{\prime}},{y}^{{\prime}},{z}^{{\prime}})\) is the new coordinate of the point after transformation.
Projection transformation
Projection transformation is used to display a 3D object on a 2D monitor. In perspective projection, the depth of an object affects its apparent size, which contributes to realism. The perspective projection transform may be expressed as:
$$\left[\begin{array}{c}{x}^{{\prime}}\\ {y}^{{\prime}}\\ {z}^{{\prime}}\\ w\end{array}\right]=\left[\begin{array}{cccc}\frac{f}{d}&0&0&0\\ 0&\frac{f}{d}&0&0\\ 0&0&\frac{f+n}{f-n}&\frac{-2fn}{f-n}\\ 0&0&1&0\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\\ 1\end{array}\right]$$
where:
-
\(f\) is the focal length of the virtual camera,
-
\(d\) is the distance from the camera to the projection plane,
-
\(n\) and \(f\) represent near and far clipping planes, defining the viewable depth range.
This transformation ensures that objects closer to the viewer appear larger, thereby enhancing the realism of the 3D environment.
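A minimal NumPy sketch of this projection is given below. It assumes the same scale factor \(f/d\) for both the x and y axes and performs the perspective divide by \(w\); the camera parameters are example values only.

```python
import numpy as np

def perspective_matrix(f, d, near, far):
    """Perspective projection matrix matching the form above (f: focal length,
    d: distance to the projection plane, near/far: clipping planes)."""
    return np.array([
        [f / d, 0,     0,                           0],
        [0,     f / d, 0,                           0],
        [0,     0,     (far + near) / (far - near), -2 * far * near / (far - near)],
        [0,     0,     1,                           0],
    ])

def project(point_3d, P):
    """Apply the projection and perform the perspective divide (x/w, y/w)."""
    x, y, z, w = P @ np.array([*point_3d, 1.0])
    return x / w, y / w

P = perspective_matrix(f=0.05, d=0.05, near=0.1, far=100.0)
print(project((0.2, 0.1, 2.0), P))   # nearer point projects larger
print(project((0.2, 0.1, 4.0), P))   # farther point projects smaller
```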
Haptic feedback integration
Haptic interaction is particularly important in VR-based medical training sessions, where exact motor skills and tactile sensitivity are necessary. In the second stage of the system construction, VR simulations are combined with state-of-the-art haptic systems (SensAble Omni or Haptic Master) capable of emulating touch and force feedback on a trainee to help them acquire real-life hand-eye relationships and fine motor capabilities.
To create a realistic haptic interaction, the approach utilizes tactile dimensional modulation, where a modulation texture is assigned to the virtual object so that users can feel the depth, resistance, and surface variations. These sensations correlate with the object velocity, applied force, and motion type, resulting in more subtle tactile feedback sensations.
Sensory Mapping is utilized to map certain procedural decisions (e.g., suturing and palpation) to appropriate haptic feedback, adding realism.
Synchronization ensures that the tactual feedback closely follows visual and auditory stimuli and maintains modality correspondence for simulation fidelity.
Through force vector modelling, the software considers the amount of resistance simulated components provide to a touch or tap. This immersion feature enables trainees to feel and react to the simulated tissue and instrument properties, in addition to the tactile response, and provides realistic procedural training and enhanced retention of skills through multisensory experience.
Force feedback calculation
The haptic force feedback system computes the force feedback according to the interaction between the virtual instrument (e.g., scalpel) and the virtual anatomical structure (e.g., tissue). The force feedback \(\mathbf{F}\) is often modelled using Hooke's law, given by
$$\mathbf{F}=-k\cdot \mathbf{x}$$
where:
-
\(k\) is the stiffness coefficient representing the elasticity of the virtual tissue,
-
\(\mathbf{x}\) denotes the displacement vector of the virtual tool with respect to the tissue surface.
This force calculation enables the haptic device to apply pressure that simulates the resistance of different tissues, allowing trainees to practice procedures such as incision and suturing with realistic and accurate feedback.
-
1.
Damping Effect: To prevent oscillations and ensure stable feedback, a damping term was added to the force equation. The damping force, \({\mathbf{F}}_{\text{d}}\), is given by:
$${\mathbf{F}}_{\text{d}}=-c\cdot \mathbf{v}$$
where:
-
\(c\) is the damping coefficient,
-
\(\mathbf{v}\) is the velocity of the haptic device.
The total force applied to the haptic device is then
$${\mathbf{F}}_{\text{total}}=\mathbf{F}+{\mathbf{F}}_{\text{d}}=-k\cdot \mathbf{x}-c\cdot \mathbf{v}$$
This ensures smooth and realistic feedback by controlling the stiffness and damping characteristics, which are crucial for simulating interactions with various types of tissue.
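This spring-damper model can be prototyped as in the short Python sketch below; the stiffness and damping coefficients are placeholder values rather than calibrated tissue parameters.

```python
import numpy as np

def haptic_force(displacement, velocity, k=300.0, c=5.0):
    """F_total = -k*x - c*v: spring term for tissue elasticity plus a damping
    term for stability. Units assumed N, m, m/s; k and c are example values."""
    x = np.asarray(displacement, dtype=float)
    v = np.asarray(velocity, dtype=float)
    return -k * x - c * v

# Tool pressed 2 mm into tissue while moving at 1 cm/s along the same axis
print(haptic_force([0.002, 0.0, 0.0], [0.01, 0.0, 0.0]))  # -> [-0.65, 0, 0] N
```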
Trainee performance tracking and assessment
For the VR training platform to be effective, trainee progress is tracked both in-scenario and through post-scenario surveys. This information not only verifies that the system is effective but also provides specific feedback and identifies areas to target.
-
1.
In-Scenario Metrics: Key indicators of trainee effectiveness, such as time to task completion, procedural accuracy, and decision-making ability, are measured in real time.
-
2.
Time to Completion: The total time for each task, from start to finish, is recorded and compared with previously established proficiency standards to measure efficiency.
-
3.
Procedural Correctness: After the trainee completes the procedure, this metric evaluates how closely the trainee adhered to the correct order of procedural steps, identifying incorrect, missed, or out-of-order steps.
-
4.
Decision Quality: Each decision is compared with best medical practice to rate decision quality and clinical reasoning at each decision point.
-
5.
Post-Scenario Feedback: A report with performance data, scores, errors, and tailored feedback is provided after each scenario is completed, supporting skill development, reducing knowledge decay, and recording progress over time.
The VR system adopts a robust evaluation mechanism (VRM Trainee Assessment Metrics) in which procedural accuracy, completion time, and the number of errors are combined in mathematical models. These models compare learner performance against a priori criteria to judge learner competence and support adaptive training interventions.
Procedural accuracy
Procedural correctness was assessed by measuring the deviation from a standard procedure. Each step of a procedure has a target action \(A_{t}\) and a corresponding trainee action \(A\). The procedural accuracy \(PA\) is given by:
$$PA=1-\frac{\sum \mid A-{A}_{t} \mid}{n}$$
where:
-
\(n\) is the total number of procedural steps,
-
\(\mid A-{A}_{t} \mid\) is the absolute error at each step.
-
a.
Completion Time: Completion time, \({T}_{c}\), is the duration taken by a trainee to complete the procedure compared to the expected time \({T}_{e}\). The efficiency ratio, \(ER\), is defined as:
$$ER=\frac{{T}_{e}}{{T}_{c}}$$
An \(ER\) closer to 1 indicates timely completion, while values below 1 suggest inefficiency.
-
b.
Error Rate: The error rate, \({E}_{r}\), is calculated by dividing the number of errors \(E\) by the total number of actions \(A\):
$${E}_{r}=\frac{E}{A}$$
This metric allows instructors to gauge areas that need improvement and provides objective insights into trainee proficiency.
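These three metrics can be computed directly from logged session data, as in the following illustrative Python sketch; the example values are synthetic.

```python
import numpy as np

def procedural_accuracy(actions, targets):
    """PA = 1 - sum(|A - A_t|) / n over procedural steps."""
    actions, targets = np.asarray(actions, float), np.asarray(targets, float)
    return 1 - np.sum(np.abs(actions - targets)) / len(actions)

def efficiency_ratio(expected_time, completion_time):
    """ER = T_e / T_c; values near 1 indicate timely completion."""
    return expected_time / completion_time

def error_rate(n_errors, n_actions):
    """E_r = E / A."""
    return n_errors / n_actions

print(round(procedural_accuracy([0.9, 1.1, 1.0], [1.0, 1.0, 1.0]), 3))  # 0.933
print(round(efficiency_ratio(300, 360), 3))                             # 0.833
print(round(error_rate(2, 40), 3))                                      # 0.05
```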
Adaptation of training scenarios based on performance
For adaptive training, the scenarios can be flexible and tailored to a trainee’s performance scores, personalizing the training. Adaptation is done through feedback loops, where the user performance is evaluated and the task difficulty level is updated accordingly.
Adaptive difficulty model
Let \(D\) represent the current difficulty level of the scenario, \(P\) the trainee's performance score, and \({D}_{\text{new}}\) the updated difficulty level. An adaptive model adjusts the difficulty as follows:
$${D}_{\text{new}}=D+\alpha (P-{P}_{t})$$
where:
-
\(\alpha\) is a learning rate parameter controlling the adjustment pace,
-
\({P}_{t}\) denotes the target performance score.
If the trainee's performance exceeds \({P}_{t}\), the difficulty increases; otherwise, it decreases, promoting gradual skill progression.
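A minimal sketch of this update rule is given below; the clamping bounds and learning rate are assumptions added for illustration.

```python
def update_difficulty(current_difficulty, performance, target_performance,
                      alpha=0.5, d_min=0.0, d_max=1.0):
    """D_new = D + alpha*(P - P_t), clamped to [d_min, d_max] in this sketch."""
    d_new = current_difficulty + alpha * (performance - target_performance)
    return min(max(d_new, d_min), d_max)

print(update_difficulty(0.5, performance=0.9, target_performance=0.7))  # harder: ~0.6
print(update_difficulty(0.5, performance=0.5, target_performance=0.7))  # easier: ~0.4
```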
Detailed system architecture development process
In this section, we describe the constructed VR-driven medical training platform in detail, providing in-depth information on its system architecture, development process, and the technical features and configurations of the deployed hardware and software. The tool was evaluated through data analysis and a usability test.
Hypothesis Testing: A t-test was used to compare the performance of the VR-trained and traditionally trained groups,
$$t=\frac{{\bar{X}}_{VR}-{\bar{X}}_{T}}{\sqrt{\frac{{s}_{VR}^{2}}{{n}_{VR}}+\frac{{s}_{T}^{2}}{{n}_{T}}}}$$
where \({\bar{X}}_{VR}\) and \({\bar{X}}_{T}\) are the mean scores of the VR and traditional groups, \({s}_{VR}^{2}\) and \({s}_{T}^{2}\) their sample variances, and \({n}_{VR}\) and \({n}_{T}\) the group sizes. The system's control error is regulated with a PID law,
$$u(t)={K}_{p}e(t)+{K}_{i}\int e(t)\,dt+{K}_{d}\frac{de(t)}{dt}$$
where \(e(t)\) is the error at time \(t\), and \({K}_{p}\), \({K}_{i}\), and \({K}_{d}\) are the proportional, integral, and derivative gains.
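Such a group comparison could be run as a Welch's t-test on the two groups' scores, as in the short SciPy sketch below; the score arrays are synthetic placeholders, not the study's data.

```python
from scipy import stats

# Synthetic example scores (placeholders, not the study's data)
vr_scores = [88, 91, 85, 93, 90, 87, 92]
traditional_scores = [80, 84, 79, 86, 82, 81, 85]

# Welch's t-test (does not assume equal variances between groups)
t_stat, p_value = stats.ttest_ind(vr_scores, traditional_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```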
Development process
A VR medical training platform was developed using a phased approach. Needs Assessment: Consultations with practicing clinicians were used to identify and rank key procedures according to clinical importance and training need.
-
1.
3D Modelling and Scene Composition: High-resolution anatomical models and clinical environments were generated in Blender/Maya and validated against real data.
-
2.
Haptic Feedback Implementation: Haptic device calibration was based on Hooke's law and damping models to mimic realistic tissue resistance and tactile interaction.
-
3.
Scripting for scenarios: The interactive medical scenarios were created with decision points and point-of-care feedback to mimic the actual clinical environment.
-
4.
Testing and Validation: The platform was subjected to usability, validity, and performance testing, with feedback from trainees and teachers used to further develop the system.
