Collegium:Imperium System

Overview

This document outlines the mission plan for building and testing the Imperium system, a distributed data processing pipeline. Each mission is designed to be executed in a separate thread with phased OODA (Observe, Orient, Decide, Act) loops and to be validated via independent tests upon completion.

Mission List

Missions are grouped by phase; each entry gives the mission name, a description, and its key tools and objectives.
Phase 1: Pomerium & Internal Network Foundation
NFS Setup on Roma, Horreum, and Torta: Configure NFS mounts to expose Torta's "grana" drive (~698 GB) to Roma and Horreum, unifying the three machines as a single logical system within the Pomerium. This enables seamless file sharing and supports the "single machine" goal (a configuration sketch follows this entry).
  • Tools: nfs-kernel-server, nfs-common
  • Objective: Full read/write access to the "grana" share on Torta from both Roma and Horreum, verified via file creation and listing.
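
A minimal configuration sketch for this mission, assuming the "grana" drive is exported from Torta at `/mnt/grana` and the Pomerium uses a 10.0.0.0/24 subnet (both are placeholders):

```bash
# On Torta (NFS server): export the "grana" drive to the Pomerium subnet.
sudo apt install -y nfs-kernel-server
echo '/mnt/grana 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On Roma and Horreum (NFS clients): mount the share at the same path.
sudo apt install -y nfs-common
sudo mkdir -p /mnt/grana
sudo mount -t nfs torta:/mnt/grana /mnt/grana

# Verification: create a file from Roma, then list it from Horreum.
touch /mnt/grana/nfs_smoke_test
ls -l /mnt/grana/nfs_smoke_test
```

An `/etc/fstab` entry (or a systemd automount unit) on the clients would make the mounts persistent across reboots.
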
NFS-Plus GPU Dispatching: Extend the NFS setup to enable scripts on Roma or Torta to dispatch GPU-intensive tasks to Horreum's NVIDIA RTX 5060 Ti, using the shared filesystem for data access (see the dispatch sketch below).
  • Tools: NFS, CUDA Toolkit, SSH
  • Objective: Successfully run a sample GPU task (e.g., a small AI inference) initiated from Roma that reads/writes data on the NFS share and executes on Horreum's GPU.
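
One way to realize the dispatch, sketched under the assumption that the NFS share is mounted at `/mnt/grana` on both hosts and that a sample inference script already lives on the share (script name and paths are illustrative):

```bash
# On Roma: confirm Horreum's GPU is reachable, then run the job over SSH.
ssh horreum 'nvidia-smi --query-gpu=name --format=csv,noheader'

ssh horreum 'python3 /mnt/grana/jobs/sample_inference.py \
    --input  /mnt/grana/jobs/input.json \
    --output /mnt/grana/jobs/output.json'

# Because the job reads and writes on the NFS share, Roma can pick up the
# result directly once the SSH command returns.
cat /mnt/grana/jobs/output.json
```
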
Phase 2: Aquaeductus & External Pipeline Foundation
Preparing Docker Containers and Directories on Latium and Torta: Set up the Docker containers (Pomerium, Campus Martius, Flamen Martialis) on Latium and the required directory structure on Torta's "aqua" drive (`/mnt/aqua/aqua_datum_raw`) for pipeline operations (setup sketch below).
  • Tools: Docker, .bashrc, Shell scripts
  • Objective: Functional containers and directories, tested by mock commands in each context.
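
A possible setup sequence, assuming the container images are built locally under the names shown (image names and the Latium host path are assumptions):

```bash
# On Torta: create the "aqua" directory structure the pipeline expects.
sudo mkdir -p /mnt/aqua/aqua_datum_raw

# On Latium: start the three containers.
docker run -d --name pomerium         --restart unless-stopped pomerium:latest
docker run -d --name campus_martius   --restart unless-stopped campus_martius:latest
docker run -d --name flamen_martialis --restart unless-stopped \
    -v /srv/flamen/outbox:/outbox flamen_martialis:latest

# Mock test: each container should answer a trivial command.
for c in pomerium campus_martius flamen_martialis; do
    docker exec "$c" echo "ok: $c"
done
```
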
Simple Data Diodes & RSYNC Optimization: Establish a fast, secure, one-way data flow (the Aquaeductus) from Latium to Torta using RSYNC over the WireGuard tunnel. This replaces SCP to avoid bottlenecks (see the push sketch after this entry).
  • Tools: RSYNC, WireGuard, cron
  • Objective: A one-way push of `aqua_datum` to Torta's "aqua" drive, tested by verifying transfer speed and the inability to initiate a connection from Torta back to the originating service on Latium.
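
A sketch of the one-way push, assuming Torta's WireGuard address is 10.8.0.2 and the transfer runs as an unprivileged `pipeline` user whose key exists only on Latium (addresses, user, and paths are placeholders):

```bash
# ~/.ssh/config on Latium: alias the tunnel endpoint.
#   Host torta-wg
#       HostName 10.8.0.2
#       User pipeline

# One-way push from Latium; sources are removed after a verified transfer.
rsync -az --partial --remove-source-files \
    /srv/flamen/outbox/ torta-wg:/mnt/aqua/aqua_datum_raw/

# Schedule it every five minutes (crontab -e on Latium):
# */5 * * * * rsync -az --partial --remove-source-files /srv/flamen/outbox/ torta-wg:/mnt/aqua/aqua_datum_raw/
```

The diode property comes from credential placement rather than rsync itself: Torta holds no key for Latium, so the reverse connection required by the objective's test should fail.
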
Firejail/Bubblewrap Sandboxing: Deploy Firejail on Latium to sandbox the `Flamen Martialis` script, ensuring secure processing of external `aqua_datum` by restricting its filesystem and network access (a sandbox invocation sketch follows).
  • Tools: Firejail, Python
  • Objective: A sandboxed mock script with verifiably restricted access to the host system.
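
A minimal sandbox invocation, assuming the mock script lives under `/srv/flamen/` (paths and script name are illustrative):

```bash
# On Latium: run the mock Flamen Martialis script with a throwaway home
# directory and no network access at all.
mkdir -p /srv/flamen/sandbox-home
firejail --net=none --private=/srv/flamen/sandbox-home \
    python3 /srv/flamen/flamen_martialis_mock.py

# Verify the restrictions: the real home directory contents are hidden and any
# outbound connection attempt from inside the sandbox fails.
firejail --net=none --private=/srv/flamen/sandbox-home ls -a "$HOME"
```
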
Phase 3: End-to-End Pipeline Integration & Testing
Flamen Martialis and Salii Separation: Implement the full Flamen/Salii workflow: Flamen on Latium collects and sanitizes `aqua_datum`, which is then transferred to Torta, while Salii on Roma detects the new data and orchestrates internal processing (a watcher sketch follows this entry).
  • Tools: Python, RSYNC, NFS, SSH
  • Objective: Flamen Martialis successfully collects `aqua_datum`, and Salii on Roma successfully processes it to `grana_datum` on the NFS share.
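
A sketch of the Salii side on Roma, assuming Torta's "aqua" drive is also visible to Roma over NFS at `/mnt/aqua` (the directory layout and the `process_to_grana.py` helper are assumptions; polling is used because inotify events are not delivered reliably for changes made by other NFS clients):

```bash
#!/usr/bin/env bash
# salii_watcher.sh -- detect new aqua_datum on the shared drive and process it.
WATCH_DIR=/mnt/aqua/aqua_datum_raw
OUT_DIR=/mnt/grana/grana_datum
mkdir -p "$OUT_DIR" "$WATCH_DIR/processed"

while true; do
    for f in "$WATCH_DIR"/*.json; do
        [ -e "$f" ] || continue                    # glob matched nothing
        if python3 /mnt/grana/scripts/process_to_grana.py "$f" "$OUT_DIR"; then
            mv "$f" "$WATCH_DIR/processed/"        # archive only on success
        fi
    done
    sleep 60
done
```
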
JSONPlaceholder Data Pipeline Test: Test the full, simple pipeline using JSONPlaceholder's mock API, simulating the data flow from Latium -> Torta -> Roma -> OodaWiki. This validates the entire end-to-end architecture (an ingestion sketch follows).
  • Tools: Python, RSYNC, NFS, Pywikibot
  • Objective: A complete data cycle, verified by seeing the mock data correctly published on an OodaWiki page.
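
A sketch of the ingest end of this test on Latium, dropping a JSONPlaceholder record into the Flamen outbox so the existing rsync stage carries it onward (outbox path and filename scheme are assumptions; the final Pywikibot publish step on Roma is only hinted at in the comment):

```bash
# On Latium: pull one mock record and stage it as aqua_datum.
curl -s https://jsonplaceholder.typicode.com/posts/1 \
    -o "/srv/flamen/outbox/aqua_datum_$(date +%Y%m%dT%H%M%S).json"

# Downstream on Roma, after Salii has produced grana_datum, a minimal publish
# step with Pywikibot could look like:
#   python3 -c "import pywikibot; site = pywikibot.Site(); \
#               page = pywikibot.Page(site, 'Sandbox/PipelineTest'); \
#               page.text = open('/mnt/grana/grana_datum/latest.txt').read(); \
#               page.save('Imperium pipeline test')"
```
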
NOTAM Data Pipeline Test: Test the pipeline with a more complex, authenticated source using the FAA's NOTAM API. This focuses on scheduled pulls and performance with real-world data (a scheduled-pull sketch follows).
  • Tools: Python, RSYNC, NFS
  • Objective: Successful and reliable ingestion of NOTAM data, stored as `grana_datum` on the NFS share.
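
A hedged sketch of the scheduled pull; the endpoint URL, header names, and credential variables below are all placeholders to be replaced with the values from the actual FAA API registration:

```bash
#!/usr/bin/env bash
# pull_notams.sh -- scheduled NOTAM ingest on Latium (all specifics are placeholders).
NOTAM_URL="https://example.invalid/notamapi/v1/notams?icaoLocation=KJFK"
curl -s \
    -H "client_id: ${FAA_CLIENT_ID}" \
    -H "client_secret: ${FAA_CLIENT_SECRET}" \
    "$NOTAM_URL" \
    -o "/srv/flamen/outbox/aqua_datum_notam_$(date +%Y%m%dT%H%M%S).json"

# Hourly schedule (crontab -e on Latium):
# 0 * * * * /srv/flamen/pull_notams.sh >> /var/log/flamen/notam_pull.log 2>&1
```
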
Phase 4: Optimization & Future Development
Tar + Netcat (nc) Implementation: Implement and benchmark `tar` + `nc` for large, one-time "burst" transfers and compare its performance to RSYNC to establish a decision matrix for future pipeline tool selection (see the transfer sketch below).
  • Tools: Tar, Netcat, WireGuard
  • Objective: A functional burst transfer and a documented decision matrix for when to use RSYNC vs. `tar` + `nc`.
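
A burst-transfer sketch over the tunnel; 10.8.0.2 stands in for Torta's WireGuard address, and the flag syntax assumes a Debian-style netcat (implementations differ, so adjust `-l`/`-p`/`-q` as needed):

```bash
# On Torta (start the receiver first): unpack the stream straight onto the aqua drive.
nc -l -p 9000 | tar -x -C /mnt/aqua/aqua_datum_raw

# On Latium (sender): stream the outbox tree through the tunnel.
tar -c -C /srv/flamen/outbox . | nc -q 1 10.8.0.2 9000

# Benchmark the same payload with rsync to fill in the decision matrix.
time rsync -az /srv/flamen/outbox/ torta-wg:/mnt/aqua/aqua_datum_raw/
```
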
Supabase Integration: Integrate Supabase as an optional, advanced filtering buffer for `aqua_datum`, using its edge functions and Row-Level Security (RLS) to validate data before it enters the Aquaeductus (a REST sketch follows this entry).
  • Tools: Supabase client libraries, REST APIs
  • Objective: A validated data push/pull through Supabase, tested with a mock schema and RLS policy.
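
A sketch of the buffer interaction through Supabase's auto-generated REST interface, assuming a hypothetical `aqua_datum_buffer` table with a `validated` column and RLS policies already in place (project URL, key variables, and column names are assumptions):

```bash
SUPABASE_URL="https://your-project.supabase.co"     # placeholder project URL
SUPABASE_KEY="${SUPABASE_ANON_KEY}"                 # anon key; RLS decides what it may do

# Push a candidate record into the buffer table.
curl -s -X POST "$SUPABASE_URL/rest/v1/aqua_datum_buffer" \
    -H "apikey: $SUPABASE_KEY" \
    -H "Authorization: Bearer $SUPABASE_KEY" \
    -H "Content-Type: application/json" \
    -d '{"source": "jsonplaceholder", "payload": {"id": 1}}'

# Pull back only the rows the validation logic has approved.
curl -s "$SUPABASE_URL/rest/v1/aqua_datum_buffer?select=*&validated=eq.true" \
    -H "apikey: $SUPABASE_KEY" \
    -H "Authorization: Bearer $SUPABASE_KEY"
```
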
Automation/Standardized Deployment Script: Develop a `creo_castellum.sh` CLI script to automate the setup of new data pipelines across Imperium, based on the lessons learned from the manual builds (a skeleton sketch follows).
  • Tools: Bash/Python, Docker, NFS
  • Objective: A script that can provision a new data pipeline with a single command, tested by deploying a new mock project.
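
A skeleton of what `creo_castellum.sh` might look like, stitched together from the earlier missions; every host alias, path, and image name is an assumption carried over from the sketches above:

```bash
#!/usr/bin/env bash
# creo_castellum.sh -- provision a new Imperium data pipeline in one command (sketch).
set -euo pipefail
PROJECT="${1:?usage: creo_castellum.sh <project-name>}"

# 1. Raw-data directory on Torta's aqua drive.
ssh torta "sudo mkdir -p /mnt/aqua/${PROJECT}_datum_raw"

# 2. Dedicated Flamen container on Latium.
ssh latium "docker run -d --name flamen_${PROJECT} --restart unless-stopped \
    -v /srv/flamen/${PROJECT}/outbox:/outbox flamen_martialis:latest"

# 3. Cron entry on Latium for the one-way rsync push over WireGuard.
ssh latium "( crontab -l 2>/dev/null; echo '*/5 * * * * rsync -az --remove-source-files /srv/flamen/${PROJECT}/outbox/ torta-wg:/mnt/aqua/${PROJECT}_datum_raw/' ) | crontab -"

echo "castellum '${PROJECT}' provisioned"
```

Testing the script by provisioning a new mock project exercises the same NFS, Docker, and rsync paths validated in the earlier missions.
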