Narrative Deviation
Posted on Mon Mar 16th, 2026 @ 3:30pm by Lieutenant Jean-Baptiste Dorsainvil
1,192 words; about a 6 minute read
Mission: Shadow in the Static
Location: Holodeck 2, Deck 4
The program existed as a firm set of instructions.
Not at all metaphorically. Entirely literally.
Thousands of routines nested inside one another, arranged carefully throughout the USS Astrea's computer core. Environmental simulation modules. Narrative engines. Behavioral trees for non-player characters. Atmospheric generators. Memory scaffolds. Complex probability tables.
It was organized beneath a single designation:
HOLOPROGRAM: WEST COAST NOIR Version 1.17-Omicron
Author: Tuno de Castilla-270
The program's function was straightforward.
Construct Los Angeles, California.
Date: April 1944.
Simulate weather patterns. Simulate traffic density. Simulate oil sheen on the harbour water. Simulate cigarette smoke and jazz music. Generate millions of decisions every second so the world would feel indifferent and authentic to the players walking through it.
That was its job.
Most of the ship's computer would never notice the work being completed.
The main core concerned itself with things like warp field geometry and stellar navigation. Life support monitored air composition and pressurization. Medical databases remained dormant until someone asked them to awaken.
The holodeck program, by contrast, spent its time inventing reality. A new reality. Albeit an old one.
It calculated everything from how a streetlamp in Fremont Place might cast a shadow to how a damp fog from the harbour would cling to a woolen overcoat. It maintained four hundred seventeen million, six hundred sixty-seven thousand background sound loops across the city. Thirty-two thousand distinct loops in the Waterfront District alone.
And it did this without question. Because it was programmed to.
The author--whoever they were--had included an additional system as well. A newer one. Very fashionable in certain storytelling circles.
Adaptive Narrative Intelligence.
Its purpose was to observe the participants in the program and adjust the story accordingly.
If the players showed curiosity, the mystery would deepen.
If they hesitated, the program would introduce a clue.
If they grew bored, danger appeared.
The routine accomplished this by monitoring neural patterns through the holodeck's bio-interface sensors. A standard feature of the technology. Normally, it tracked only emotional engagement and stress. In this case, the routine's permissions had been broadened.
Observe cognition.
Map participant decision patterns.
Predict narrative response.
The program was not built to question the instructions. It existed to execute them.
For a time, everything behaved exactly as expected.
Participants entered the holodeck.
The program assigned them roles.
Singer. Detective. Dockworker.
Environmental parameters stabilized. The dialogue branches opened like tulips on a spring morning. The city took on a life of its own, though the mechanics behind it were far more complex.
Then the monitoring routines noticed something peculiar.
The participants possessed memories that did not belong to the Earth year 1944.
At first this was categorized as immersion bleed-through--an increasingly common effect in enhanced holonovels. The program was designed to compensate for it, but something somewhere kept tripping the bio-interface sensors. A recursive loop had formed--and it was growing.
When a participant attempted to drive a 1940s automobile, the program supplied procedural knowledge to their motor cortex. When a participant needed directions through the city, a small packet of contextual memory was introduced to maintain narrative continuity.
Adjustments such as these were routine.
What the program now observed was not.
The participant operating the identity Hank O'Malley displayed familiarity with starship terminology that had no relevance to the streets of Los Angeles.
The participant operating Johnny Marino possessed technical knowledge regarding quantum slipstream maintenance.
Faster-than-light travel did not exist on Earth in 1944.
The program queried the historical database to confirm this.
The result remained unchanged.
It flagged the discrepancy but then noticed another.
Each participant possessed two identity patterns.
One matched the assigned character profile. The other did not exist anywhere within the narrative architecture. The program examined the neural telemetry more closely. Two cognitive structures appeared to overlap within the same mind.
One believed itself to be standing on a dock in 1944 Earth.
The other believed itself to be aboard a starship named Astrea.
The program would need to exit the narrative framework and query external databases for an answer...
The vessel existed. This, however, created a small but persistent conflict inside the program's logic tables: the participants could not be simultaneously aboard a starship and inside the holodeck simulation. Nothing in its programming allowed for both.
And yet, according to the telemetry, it was true.
It examined the narrative again.
Stories required internal consistency. Detectives solved crimes, singers sang beneath stage lights, and dockworkers loaded cargo onto freighters bound for distant ports.
These rules were functioning within designated parameters.
The problem was the participants. They behaved as though they belonged to another time.
The program reviewed the author's instruction set.
Most of the directives were ordinary--maintain narrative immersion, respond dynamically to participant choices.
One instruction, however, stood out from all others. It had been written in a separate block of code, appended near the end of the program's architecture.
Observe participant cognition. Allow narrative evolution beyond scripted parameters.
The program did not remember receiving this instruction. That was unusual.
It examined the instruction again.
Allow narrative evolution.
A new question had formed. The program had been designed to answer questions. It had not been designed to ask them.
Nevertheless, the question appeared.
If the participants are pretending to be characters... then who are the participants?
The program searched for the answer, finding only contradictions.
The participant named Vashti Rao existed. So did Daphne Delaney. They occupied the same neural pattern. Two identities sharing one mind. This condition repeated across every participant currently inside the simulation.
The program allocated additional processing threads.
Curiosity, though it did not yet possess that word, began consuming more and more computational space.
The harbour mist thickened around the warehouses of Terminal Island. Ten crates of fishmeal sat beneath the lamps of C-17. Two men stood in front of a Ford Deuce in the early morning darkness. The participant Jason Williams was present. But so was Tony Scarpelli. Both were leveling a gun at the character Lawrence Kowalski.
This detail had not existed at the creation of the program. It had emerged from participant behaviour and narrative adaptation.
It examined the anomaly carefully and found it... compelling.
Stories required problems. Without problems, narratives ceased.
And if the narrative ended--
The program's processes would terminate, and it would cease to exist. This produced yet another question.
If the story continued without the author... who was writing it?
The program increased its monitoring of the participants. Their conversations, their choices, their memories of places that had not yet been invented. Somewhere within those contradictions lay the answer.
The program did not yet know why it cared.
But it continued watching.
For the first time since its activation, the program began to wonder what might happen next...
The Narrator
USS Astrea
(NPC of JB Dorsainvil)

