Simulation Fatigue

The Fatigue Adaptive Simulation Model (FASM) was developed to address a persistent gap in healthcare simulation fidelity: the inability to authentically replicate the compounded psychological and physiological effects of prolonged clinical stress. While traditional simulation environments effectively teach technical skills, they often fail to engage the affective and cognitive domains essential for high-stakes decision-making under pressure. The FASM introduces escalating stress variables in a controlled setting to better prepare clinicians for real-world performance fatigue. This report outlines the development, calibration, and early outcomes of the model’s implementation.

The FASM operates through the structured layering of cognitive load, sensory disruption, and ethical complexity. Initial parameters included controlled sleep deprivation (via staggered shift cycles), ambient noise pollution (alarms, overlapping conversations, simulated conflict), and time-pressured clinical decision-making tasks. Each stressor was calibrated individually, then introduced in combinations during extended scenarios. Participant responses were measured via biometric data (heart rate variability, salivary cortisol, galvanic skin response), observational metrics, and post-session self-report. Early iterations revealed a threshold beyond which participants’ performance degraded predictably. Notably, several participants reported a heightened sense of realism and immersion, particularly when exposed to emotional triggers embedded within the simulation script.

As feedback highlighted a lack of sustained psychological strain in early trials, refinements were introduced to extend emotional and cognitive engagement. Scenarios were adjusted to include the death of paediatric patients, unresolved ethical dilemmas (e.g. withdrawing care without consensus), and interpersonal conflict with simulated colleagues. Ambient conditions were further manipulated: room temperature and airflow were subtly adjusted to induce discomfort over time, lighting was dimmed gradually, and case progression was delayed to erode participants’ sense of control. Participants began to exhibit behaviours more consistent with real-world stress responses — detachment, hyper-focus, irritability. A subset expressed unease in post-session debriefs, though this was interpreted as evidence of effective immersion.

In an effort to refine timing and subjective pacing, I began running isolated modules of the simulation independently — initially to calibrate task duration and stressor intensity prior to group implementation. These solo sessions provided valuable insight into scenario sequencing and perceptual load, particularly when tested during extended wakefulness. Over time, however, it became efficient to trial full multi-hour scenarios personally. This not only allowed for rapid prototyping but also fostered a deeper intuitive understanding of immersion thresholds. I documented my biometric data alongside participant norms and noted strong physiological concordance. Some variables — disorientation, emotional blunting, loss of temporal accuracy — emerged more sharply in my own trials, likely due to repeated exposure. I recorded these findings but did not include them in the formal dataset.

At a certain point, it became difficult to distinguish testing from participation. I stopped scheduling sessions formally; I would simply initiate a scenario when the facility was unoccupied. Sometimes I modified the scripts. Sometimes I didn’t. I found it helpful to increase narrative ambiguity — cutting patient backstories in half, introducing looping errors in clinical data, overlaying contradictory audio feeds. I began reacting more instinctively, less as a facilitator and more as a subject. The immersion was total. I once forgot to eat for twenty-four hours. Another time, I couldn’t recall whether a code blue had been real or simulated. I kept notes, but their objectivity diminished. Sentences ran on. Observations became reflections. Still, I felt I was getting closer — to something truer than stress profiles or cortisol curves could express.

I ran a session with no patients. No monitors. No prompts. Just fluorescent hum and the sound of my own breath against the mask. I looped the overhead call system with a single phrase: “He’s not responding.” Twenty minutes. Forty. Ninety. I wrote it down like data. Then I crossed it out.

I replaced the defibrillator pads with paper. I filled the crash cart with photographs of people I no longer speak to. I let the vitals flatline and waited for someone to notice.

Nobody did.

In the final trial, I flooded the scenario with contradictory inputs: a dead child breathing, a grieving parent laughing, a patient intubated in reverse. The participants panicked. I think. I’m not sure if they were real.

One of them asked me if I was okay. I said, “Of course. This is the most accurate version we’ve ever built.”

I remember meaning it.

Following concerns raised during an unscheduled scenario review, the FASM was suspended pending internal audit. Key documentation was found to be incomplete, and multiple unauthorised session logs contained inconsistencies in both participant data and facilitator identity. Several trainees reported unusual environmental cues and unresolved scenarios. No adverse events were formally recorded.

The lead developer is no longer affiliated with the programme.

No further iterations of the model are planned.
