# Collegium:Imperium System
## Overview
This document outlines the mission plan for building and testing the Imperium system, a distributed data processing pipeline. Each mission is designed to be executed in its own thread, driven by phased OODA (Observe, Orient, Decide, Act) loops, and validated by independent tests upon completion.
## Mission List
| Mission Name | Description | Key Tools & Objectives |
|---|---|---|
| **Phase 1: Pomerium & Internal Network Foundation** | | |
| NFS Setup on Roma, Horreum, and Torta | Configure NFS mounts to unify the "grana" drive (~698 GB) across Roma, Horreum, and Torta as a single logical filesystem within the Pomerium. This enables seamless file sharing and supports the "single machine" goal. | NFS |
| NFS-Plus GPU Dispatching | Extend the NFS setup so scripts on Roma or Torta can dispatch GPU-intensive tasks to Horreum's NVIDIA RTX 5060 Ti, using the shared filesystem for data access. | NFS, NVIDIA RTX 5060 Ti |
| **Phase 2: Aquaeductus & External Pipeline Foundation** | | |
| Preparing Docker Containers and Directories on Latium and Torta | Set up the Docker containers (Pomerium, Campus Martius, Flamen Martialis) on Latium and the required directory structure on Torta's "aqua" drive (`/mnt/aqua/aqua_datum_raw`) for pipeline operations. | Docker |
| Simple Data Diodes & RSYNC Optimization | Establish a fast, secure, one-way data flow (the Aquaeductus) from Latium to Torta using RSYNC over the WireGuard tunnel, replacing SCP to avoid bottlenecks. | RSYNC, WireGuard |
| Firejail/Bubblewrap Sandboxing | Deploy Firejail on Latium to sandbox the `Flamen Martialis` script, ensuring secure processing of external `aqua_datum` by restricting its filesystem and network access. | Firejail, Bubblewrap |
| **Phase 3: End-to-End Pipeline Integration & Testing** | | |
| Flamen Martialis and Salii Separation | Implement the full Flamen/Salii workflow: Flamen on Latium collects and sanitizes `aqua_datum`, which is then transferred to Torta; Salii on Roma detects the new data and orchestrates internal processing. | Flamen Martialis, Salii |
| JSONPlaceholder Data Pipeline Test | Test the full, simple pipeline using JSONPlaceholder's mock API, simulating the data flow from Latium -> Torta -> Roma -> OodaWiki. This validates the entire end-to-end architecture. | JSONPlaceholder API |
| NOTAM Data Pipeline Test | Test the pipeline against a more complex, authenticated source, the FAA's NOTAM API, focusing on scheduled pulls and performance with real-world data. | FAA NOTAM API |
| **Phase 4: Optimization & Future Development** | | |
| Tar + Netcat (nc) Implementation | Implement and benchmark `tar` + `nc` for large, one-time "burst" transfers, comparing its performance against RSYNC to establish a decision matrix for future pipeline tool selection. | `tar`, `nc` |
| Supabase Integration | Integrate Supabase as an optional, advanced filtering buffer for `aqua_datum`, using its edge functions and Row-Level Security (RLS) to validate data before it enters the Aquaeductus. | Supabase edge functions, RLS |
| Automation/Standardized Deployment Script | Develop a `creo_castellum.sh` CLI script that automates the setup of new data pipelines across the Imperium, based on lessons learned from the manual builds. | `creo_castellum.sh` |
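The Phase 1 NFS mission reduces to a server-side export plus client-side mounts. The export path, the internal subnet, and which node physically hosts the grana drive are assumptions here (Torta is used as the server); adjust to the actual topology.

```shell
# --- On the node hosting the grana drive (assumed: Torta) ---
# /etc/exports entry: share /mnt/grana read-write with the internal subnet.
#   /mnt/grana  10.0.0.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra          # re-read /etc/exports and apply the new export

# --- On Roma and Horreum (clients) ---
# /etc/fstab entry so the mount survives reboots:
#   torta:/mnt/grana  /mnt/grana  nfs  defaults,_netdev  0  0
sudo mkdir -p /mnt/grana
sudo mount -t nfs torta:/mnt/grana /mnt/grana
```

With identical mount points on every node, a path like `/mnt/grana/jobs/in/` means the same thing everywhere, which is what the "single machine" goal requires.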
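The GPU-dispatch mission builds directly on that shared mount: because every node sees the same paths under `/mnt/grana`, a script on Roma or Torta only needs to execute the command remotely on Horreum. The script name, directory layout, and file names below are hypothetical.

```shell
# Stage the input on the shared NFS mount, then run the job on Horreum's GPU.
# Horreum reads and writes the same /mnt/grana paths, so no data copy occurs.
cp batch_001.parquet /mnt/grana/jobs/in/
ssh horreum \
  'python3 /mnt/grana/jobs/run_gpu_task.py \
     --input  /mnt/grana/jobs/in/batch_001.parquet \
     --output /mnt/grana/jobs/out/batch_001.result'
```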
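For the data-diode mission, a one-way push over the WireGuard tunnel might look like the following. The tunnel address, user, and source directory are placeholders; the destination path is the one named in the mission description.

```shell
# Push new aqua_datum from Latium to Torta over the WireGuard tunnel.
# -a: preserve metadata; -z: compress in transit; --partial: resume big files.
# --remove-source-files keeps the flow one-way: nothing lingers on Latium.
rsync -az --partial --remove-source-files \
  /var/spool/aqua_outbound/ \
  pipeline@10.0.0.2:/mnt/aqua/aqua_datum_raw/
```

Note that rsync alone only defines the push; a true diode also needs firewall rules on the tunnel that block the reverse direction.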
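The sandboxing mission can be approximated with a Firejail invocation along these lines; the script path and private directory are assumptions, and the exact profile would be tuned to what Flamen actually needs.

```shell
# Run the Flamen Martialis sanitization pass with a private home directory
# and no network access (fetching would happen in a separate, networked step).
firejail --noprofile \
         --private=/srv/flamen \
         --net=none \
         python3 /srv/flamen/flamen_martialis.py
```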
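The Phase 4 burst-transfer mission streams a tar archive through netcat instead of rsync. The commented receiver/sender pair shows the network form (port and hostname are placeholders, and `nc` flag syntax varies between netcat variants); the runnable lines below demonstrate the identical tar stream through a local pipe.

```shell
# Network form (run the receiver on Torta first):
#   on Torta:  nc -l -p 7000 | tar -xf - -C /mnt/aqua/aqua_datum_raw
#   on Latium: tar -cf - burst_data | nc torta 7000
#
# Local demonstration of the same stream, with a shell pipe standing in for nc:
mkdir -p /tmp/burst_src /tmp/burst_dst
echo "sample payload" > /tmp/burst_src/sample.txt
tar -C /tmp/burst_src -cf - . | tar -C /tmp/burst_dst -xf -
ls /tmp/burst_dst
```

Benchmarking this against the rsync push on the same dataset yields the decision matrix the mission calls for: tar+nc avoids per-file overhead on one-time bursts, while rsync wins on resumability and incremental updates.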
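The deployment-script mission could start from a skeleton like this. The scaffold layout (`raw`/`sanitized`/`logs`) and the `/tmp` root are invented for illustration, not the real Imperium layout.

```shell
#!/bin/sh
# creo_castellum.sh (sketch): scaffold the directories a new pipeline needs.
creo_castellum() {
    name="$1"
    root="/tmp/imperium/${name}"   # stand-in for the real deployment root
    mkdir -p "${root}/raw" "${root}/sanitized" "${root}/logs"
    printf '%s\n' "${name}" > "${root}/pipeline.conf"
    echo "created pipeline scaffold at ${root}"
}

creo_castellum demo_pipeline
```

The real script would grow flags for things learned during the manual builds (which Docker containers to start, which rsync source to register), but a directory scaffold is the natural first automatable step.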