{{disambiguation|
The term ARC can refer to:
* '''Animus Recurrens Cogitans''': The neural architecture described on this page.
* '''Abstraction and Reasoning Corpus''': A benchmark dataset for abstract reasoning.
* '''Aetheric Reasoning Cortex''': A conceptual term for the ARC agent's function.
}}
{{About|the ARC neural architecture}}

'''Animus Recurrens Cogitans (ARC)''' is a novel neural architecture developed within the AetherOS project by '''Isidore Lands, Sila...'''
This final integration closes the loop between abstract experience, simulated physical embodiment, and non-deterministic action, creating an agent that is influenced by "the ghost in its machine."


== Project Journal ==
''(This section is a living document, updated as experiments progress and new results are achieved.)''

* '''SAGA v1.0 - Cycle 1 (Departure):''' The agent, with no prior experience, followed its baseline training. It correctly identified the most direct path to the goal but was immediately trapped by the unforeseen wall, entering a repetitive failure loop. This confirmed the baseline behavior and the necessity of the learning loop.
* '''SAGA v1.0 - Cycle 2 (Trials):''' After receiving the first Saga, the agent's behavior fundamentally changed. It successfully broke from its initial instinct and explored a completely new path, demonstrating that the SAGA learning loop is active and influential. However, this new path also resulted in failure at a new chokepoint.
* '''SAGA v2.0 - Stalled Learning:''' Subsequent experiments revealed that the agent's learning had stalled. The agent would consistently repeat the improved path from Cycle 2 but was unable to evolve further. The root cause was identified as a bottleneck in the `SagaGenerator`: unreliable LLM providers (both local and remote) failed to produce the high-quality, prescriptive Sagas (in particular, ones carrying a `SUGGERO` command) needed for the agent to learn a more complex strategy. This highlighted the need for a more reliable LLM and a more advanced training curriculum for the ARC itself; a sketch of the quality gate this implies follows below.
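
One immediate mitigation suggested by these entries is to gate generated Sagas before they enter the learning loop, rejecting any that lack a prescriptive command. Below is a minimal sketch of such a gate, not project code; the Saga-as-plain-text interface and the function names are assumptions for illustration, with only the `SUGGERO` command taken from the journal above.

<syntaxhighlight lang="python">
# Hypothetical quality gate for generated Sagas. Only the SUGGERO command
# comes from the project journal; everything else here is an assumption.
import re

REQUIRED_COMMAND = "SUGGERO"  # the prescriptive directive the ARC learns from

def is_prescriptive(saga_text: str) -> bool:
    """Accept a Saga only if it carries a concrete SUGGERO directive."""
    # Example hit: "SUGGERO: move WEST around the wall, then resume EAST."
    return re.search(rf"\b{REQUIRED_COMMAND}\b\s*:?\s*\S+", saga_text) is not None

def filter_sagas(candidates: list[str]) -> list[str]:
    """Drop non-prescriptive Sagas so they never stall the learning loop."""
    return [saga for saga in candidates if is_prescriptive(saga)]

if __name__ == "__main__":
    sagas = [
        "The hero wandered the maze and was lost.",              # rejected
        "SUGGERO: step SOUTH to clear the wall, then go EAST.",  # accepted
    ]
    print(filter_sagas(sagas))
</syntaxhighlight>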

== Future Research Directions ==
The ARC project is an ongoing exploration into the nature of machine reasoning. Future work is planned in two main phases, leveraging high-performance GPU environments for training.

=== Attention Mechanism ===
Inspired by the discovery that modern attention is a form of [[Associative memory (psychology)|Associative Memory]], the next architectural evolution will be to upgrade the ARC with an attention mechanism. This will allow the agent to move beyond a single summary vector and learn to place a "spotlight" on specific, relevant memories from its past Sagas, enabling a more human-like, context-aware reasoning process, as illustrated in the sketch below.
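
As a concrete illustration of the idea (not the project's code), the sketch below applies standard scaled dot-product attention over a bank of stored Saga embeddings: the agent's current state is the query, and the softmax weights are the "spotlight" over individual memories. All shapes and names are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal sketch: attention over past-Saga embeddings (assumed interface).
# The current state embedding is the query; stored Saga vectors serve as
# keys and values, so the agent can attend to relevant memories instead
# of consuming one pooled summary vector.
import math
import torch
import torch.nn.functional as F

def recall(query: torch.Tensor, saga_memory: torch.Tensor) -> torch.Tensor:
    """query: (d,) state embedding; saga_memory: (n, d), one row per Saga."""
    d = query.shape[-1]
    scores = saga_memory @ query / math.sqrt(d)   # (n,) relevance per memory
    weights = F.softmax(scores, dim=-1)           # the "spotlight"
    return weights @ saga_memory                  # (d,) context-aware readout

# Example: 5 remembered Sagas, 64-dim embeddings (illustrative sizes).
memory = torch.randn(5, 64)
state = torch.randn(64)
context = recall(state, memory)   # would feed the policy alongside the state
</syntaxhighlight>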

=== Dynamic Chunking ===
The ultimate goal is to replace the pre-trained sentence-transformer with a dynamic chunking module, inspired by recent advances in hierarchical networks. This would allow ARC to learn its own internal language for understanding the Sagas directly from raw text, creating a truly end-to-end, self-organizing intelligence. The sketch below illustrates the general idea.
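
Purely as an illustration of the technique, one plausible shape for such a module is a byte-level encoder with a learned per-byte boundary score, so segmentation can in principle be optimized with the rest of the network instead of being fixed by a pre-trained tokenizer. The design and names below are assumptions, and a real version would need a differentiable relaxation of the hard boundary decision.

<syntaxhighlight lang="python">
# Sketch of a learned chunk-boundary predictor over raw bytes (assumed
# design, not the project's). A sigmoid "boundary score" per byte lets
# segmentation be learned rather than fixed by a pre-trained tokenizer.
import torch
import torch.nn as nn

class DynamicChunker(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.byte_embed = nn.Embedding(256, dim)   # raw-byte vocabulary
        self.boundary = nn.Linear(dim, 1)          # per-byte boundary score

    def forward(self, byte_ids: torch.Tensor) -> list[torch.Tensor]:
        x = self.byte_embed(byte_ids)              # (seq, dim)
        # Hard threshold for clarity; training would need a differentiable
        # relaxation (e.g. a straight-through estimator).
        cut = torch.sigmoid(self.boundary(x)).squeeze(-1) > 0.5
        # Mean-pool each learned chunk into one vector for the ARC core.
        chunks, start = [], 0
        for i, is_cut in enumerate(cut.tolist()):
            if is_cut:
                chunks.append(x[start : i + 1].mean(dim=0))
                start = i + 1
        if start < x.shape[0]:
            chunks.append(x[start:].mean(dim=0))
        return chunks

saga = "SUGGERO: step SOUTH to clear the wall."
ids = torch.tensor(list(saga.encode("utf-8")))
print(len(DynamicChunker()(ids)))  # number of learned chunks (untrained)
</syntaxhighlight>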

== Future Training Plans ==
The results of the SAGA v2.0 experiment show that while the ARC can learn, its current training is too simplistic. The agent has learned to follow a direct path to a goal but lacks the foundational "insight" to solve problems that require non-linear solutions (e.g., moving temporarily away from the goal to get around an obstacle).


The next phase of development, '''Project Gnosis''', will address this by evolving the agent's training curriculum.


=== The "Gnosis" Training Curriculum ===
The final training run for the ARC model will incorporate a new type of training data designed to teach the foundations of insight and strategic retreat. The training data will consist of two types, mixed as sketched after this list:
# '''Instinct Data (80%):''' The same optimal, straight-line path data used previously to reinforce the agent's primary goal-seeking behavior.
# '''Gnosis Data (20%):''' A new set of procedurally generated scenarios where the agent is placed behind a small, randomly generated obstacle. In these scenarios, the "correct" move is not the most direct one, but a lateral or backward step required to navigate around the barrier.
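
The sketch below shows how such a curriculum could be assembled for a simple grid world. The 80/20 mix and the behind-an-obstacle scenarios come from the plan above; the grid encoding, function names, and sizes are illustrative assumptions.

<syntaxhighlight lang="python">
# Sketch of the Gnosis curriculum mix (assumed grid-world encoding).
# 80% "instinct" examples label the direct step toward the goal; 20%
# "gnosis" examples place a wall on the direct line, so the correct
# labeled move is a lateral step around it.
import random

GRID = 10  # illustrative board size

def instinct_example():
    """Open board: the labeled move steps directly toward the goal."""
    ax, ay = random.randint(0, GRID - 2), random.randint(0, GRID - 2)
    goal = (GRID - 1, GRID - 1)
    move = "EAST" if goal[0] - ax >= goal[1] - ay else "NORTH"
    return {"agent": (ax, ay), "goal": goal, "wall": None, "move": move}

def gnosis_example():
    """Obstacle directly ahead: the labeled move is the lateral detour."""
    ax, ay = random.randint(1, GRID - 3), random.randint(0, GRID - 1)
    wall = (ax + 1, ay)                            # blocks the direct EAST step
    move = "NORTH" if ay < GRID - 1 else "SOUTH"   # step around, not through
    return {"agent": (ax, ay), "goal": (GRID - 1, ay), "wall": wall, "move": move}

def build_curriculum(n: int = 10_000, gnosis_ratio: float = 0.20):
    """80/20 instinct/gnosis mix, shuffled so the two phases interleave."""
    data = [gnosis_example() if random.random() < gnosis_ratio
            else instinct_example() for _ in range(n)]
    random.shuffle(data)
    return data

print(sum(d["wall"] is not None for d in build_curriculum()) / 10_000)  # ~0.20
</syntaxhighlight>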


By training ARC on a curriculum that explicitly includes examples of non-linear and counter-intuitive solutions, we will be teaching it the "gnosis" it currently lacks. This will evolve the agent from one that can only follow a learned policy into one that can develop novel strategies when that policy fails.


== References ==