Technical White Paper

The Parallax Protocol

A Self-Healing, N-Version Programming Methodology Utilising Generative AI and Object Constraint Language (OCL) for Consensus-Based Software Reliability

Abstract

Problem: Traditional N-Version Programming (NVP) provides high reliability but is cost-prohibitive due to the manual labour of writing multiple distinct implementations.

Solution: A novel methodology leveraging Large Language Models (LLMs) to generate distinct implementations (C, Java, Python) from a single OCL-defined specification.

Mechanism: These implementations are executed in parallel within sandboxed environments. Their outputs are arbitrated by a Majority Voting Logic. Anomalous implementations are identified and auto-patched via a recursive feedback loop.

Result: A dramatic reduction in the cost of NVP, enabling “Mission Critical” reliability for standard software applications.

1. Introduction

1.1 The Fragility of Single-Path Coding

Traditional software development follows a “single-path” approach: one specification leads to one implementation. This creates a critical dependency on the correctness of both the specification interpretation and the implementation. Bugs introduced at any stage propagate undetected until they manifest in production.

1.2 The Promise of AI-Driven Redundancy

The advent of Large Language Models capable of generating syntactically correct code in multiple programming languages presents an unprecedented opportunity. We can now generate multiple independent implementations at marginal cost, enabling redundancy patterns previously reserved for safety-critical industries.

1.3 Thesis Statement

“Reliability is not found in the code, but in the consensus between differing implementations of the same logic.”

2. The Methodology (The Parallax Process)

2.1 Specification Definition: The “Golden Spec”

The process begins with defining a “Golden Specification” that combines a natural-language description with Object Constraint Language (OCL) constraints. The description captures intent; the OCL pre-conditions, post-conditions, and invariants capture machine-checkable rules.

golden_spec.yaml
spec_id: "LNS-402"
function_name: "calculate_amortization"
description: "Calculates monthly payment for a fixed-rate loan."
inputs:
  - name: "principal"
    type: "float"
  - name: "rate_annual"
    type: "float"
  - name: "months"
    type: "integer"
constraints:
  pre:
    - "self.principal > 0"
    - "self.rate_annual >= 0"
    - "self.months > 0"
  post:
    - "result > 0"
    - "result < self.principal"
  invariant:
    - "self.principal >= result"

2.2 Polyglot Generation

The Orchestrator uses MCP (Model Context Protocol) to isolate LLM agents into specific language roles. Each agent receives the same specification but generates code in its designated language, ensuring semantic equivalence with syntactic independence.

  • Agent A (C): “The C Smith”
  • Agent B (Java): “The Java Architect”
  • Agent C (Python): “The Python Scribe”
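
A sketch of this dispatch step follows. It assumes the Orchestrator holds a generic LLM client whose generate_code() call stands in for whichever API the Fabricators actually use; the prompt wording and role names are illustrative only.

generate_versions.py (illustrative sketch)
AGENT_ROLES = {
    "c": "Agent A: The C Smith",
    "java": "Agent B: The Java Architect",
    "python": "Agent C: The Python Scribe",
}

def generate_versions(golden_spec: str, client) -> dict:
    """Send the same Golden Spec to each isolated language agent."""
    versions = {}
    for language, role in AGENT_ROLES.items():
        prompt = (
            f"You are {role}. Implement the specification below strictly in "
            f"{language}, honouring every OCL constraint.\n\n{golden_spec}"
        )
        versions[language] = client.generate_code(prompt)  # hypothetical API call
    return versions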

2.3 The Tribunal (Voting Logic)

The consensus engine implements multiple voting strategies:

  • Unanimous: All N versions agree. Result accepted with highest confidence.
  • Majority: More than 50% agree. Result accepted; the anomaly is flagged for remediation.
  • Weighted: Historical accuracy influences vote weight dynamically.
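
The Majority strategy can be sketched as follows. Output normalisation (for example, rounding floats to a fixed tolerance) is assumed to happen before the vote, and the shape of the verdict dictionary is illustrative rather than prescribed.

tribunal.py (illustrative sketch)
from collections import Counter

def majority_vote(outputs: dict) -> dict:
    """outputs maps language -> normalised result string."""
    winner, votes = Counter(outputs.values()).most_common(1)[0]
    n = len(outputs)
    if votes == n:
        status = "unanimous"
    elif votes * 2 > n:
        status = "majority"
    else:
        status = "no_consensus"
    anomalies = [lang for lang, out in outputs.items() if out != winner]
    return {"status": status, "result": winner, "anomalies": anomalies}

# majority_vote({"c": "1342.05", "java": "1342.05", "python": "1341.99"})
# -> {"status": "majority", "result": "1342.05", "anomalies": ["python"]}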

3. System Architecture

The Orchestrator (Python/Go)

The central brain that manages the complete lifecycle: Generate → Compile → Execute → Vote → Patch. Coordinates all agents and maintains state.
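
This lifecycle can be sketched as a bounded patch loop. generate_versions and majority_vote refer to the sketches above; run_in_sandbox and request_patch are placeholders for the Proving Grounds and Fabricator calls, and the loop bound is an assumption rather than a figure from this paper.

orchestrator.py (illustrative sketch)
MAX_PATCH_LOOPS = 3  # assumed bound on remediation attempts

def orchestrate(golden_spec: str, client, inputs: dict) -> dict:
    versions = generate_versions(golden_spec, client)
    for _ in range(MAX_PATCH_LOOPS):
        # Placeholder: returns the normalised output of one version
        outputs = {lang: run_in_sandbox(lang, code, inputs)
                   for lang, code in versions.items()}
        verdict = majority_vote(outputs)
        if not verdict["anomalies"]:
            break
        # Recursive remediation: show each anomalous agent the consensus
        # result and ask it to patch its own implementation.
        for lang in verdict["anomalies"]:
            versions[lang] = request_patch(client, lang, versions[lang], verdict)
    return verdict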

The Fabricators (LLM API)

Distinct system prompts for C, Java, Python (and potentially Rust/Go for higher N). Each operates in isolation to prevent “contamination” of logic.

The Proving Grounds (Containerised Sandbox)

  • Ephemeral Docker containers for each execution
  • Network isolated (no internet access for generated code)
  • Resource limits (CPU/RAM) to normalise performance metrics
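
One way to realise these properties, assuming the standard docker CLI and a hypothetical per-language runner image, is sketched below; only stock docker run flags are used.

proving_grounds.py (illustrative sketch)
import subprocess

def run_container(language: str, workdir: str, command: list) -> subprocess.CompletedProcess:
    """Execute one generated implementation in an ephemeral, isolated container."""
    return subprocess.run(
        [
            "docker", "run",
            "--rm",                # ephemeral: container removed after the run
            "--network", "none",   # no internet access for generated code
            "--cpus", "1.0",       # normalise CPU for comparable timings
            "--memory", "256m",    # cap RAM
            "-v", f"{workdir}:/work:ro",
            f"parallax/{language}-runner:latest",  # hypothetical image name
            *command,
        ],
        capture_output=True, text=True, timeout=30,
    )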

The Observatory (ELK + Grafana)

  • Logstash: Ingests STDOUT, STDERR and Compiler Logs from all containers
  • Elasticsearch: Indexes the “Voter Logs”
  • Grafana: The “Mission Control” Dashboard
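
For illustration, a single “Voter Log” record might look like the following once it reaches Elasticsearch; the field names are an assumed schema, not one prescribed by this paper.

voter_log_record.py (illustrative shape)
voter_log_entry = {
    "spec_id": "LNS-402",
    "run_id": "run-0001",
    "language": "python",
    "stdout": "1342.05",
    "stderr": "",
    "exit_code": 0,
    "execution_ms": 41,
    "vote": "agree",               # agree | deviate | error
    "consensus_status": "majority",
}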

4. Observability and Telemetry

The Grafana dashboard provides real-time visibility into the health of the “democracy”:

  • Consensus Rate: Gauge showing the % of executions where all N versions agreed
  • Hallucination Index: Bar chart showing which language deviates most often
  • Self-Healing Velocity: Line graph tracking the patch loops required to reach consensus
  • Performance Delta: Overlay of execution time across all language runtimes
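
Two of these panels can be derived directly from the voter-log records sketched in Section 3; the field names again follow that assumed schema, not a prescribed one.

telemetry.py (illustrative sketch)
from collections import Counter

def consensus_rate(records: list) -> float:
    """Consensus Rate: % of runs in which all N versions agreed."""
    runs = {r["run_id"] for r in records}
    unanimous = {r["run_id"] for r in records if r["consensus_status"] == "unanimous"}
    return 100.0 * len(unanimous) / len(runs) if runs else 0.0

def hallucination_index(records: list) -> Counter:
    """Hallucination Index: deviations counted per language."""
    return Counter(r["language"] for r in records if r["vote"] == "deviate")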

5. Claims of Novelty

The following claims define the specific technical innovations of the Parallax Protocol. See the full Prior Art documentation for detailed specifications.

Claim 5.1: Automated Generation of Heterogeneous Implementation Logic via Constraint Injection
Claim 5.2: Synchronous Execution and Arbitration of Heterogeneous Runtime Environments
Claim 5.3: Recursive Remediation via Cross-Referenced Consensus Feedback
Claim 5.4: Telemetry-Based Divergence Analysis and Hallucination Quantification
Claim 5.5: The “Triumvirate” Architecture for Security Critical Operations

6. Conclusion

The Parallax Protocol represents a fundamental shift from Test-Driven Development (TDD) to Consensus-Driven Development (CDD). By leveraging the economic advantages of LLM-based code generation combined with the reliability guarantees of N-Version Programming, we enable mission-critical software reliability for standard applications.

Why This is Robust

  • Semantic Independence: Language-specific defects, such as memory leaks in C, rarely replicate in garbage-collected Java or dynamically typed Python. Crossing the language barrier filters out entire language-specific classes of errors.
  • AI Arbitration: LLMs excel at “Compare and Repair.” They need not be perfect coders; they need only be effective patchers when guided by a consensus engine.
  • OCL Anchoring: The OCL specification constrains the LLMs’ scope to hallucinate features by forcing adherence to a formal, machine-checkable contract.