ARC

{{disambiguation|
The term ARC can refer to:
* '''Animus Recurrens Cogitans''': The neural architecture described on this page.
* '''Abstraction and Reasoning Corpus''': A benchmark dataset for abstract reasoning.
* '''Aetheric Reasoning Cortex''': A conceptual term for the ARC agent's function.
}}
{{About|the ARC neural architecture}}

'''Animus Recurrens Cogitans (ARC)''' is a novel neural architecture developed within the AetherOS project by '''Isidore Lands, Sila...'''
This final integration closes the loop between abstract experience, simulated physical embodiment, and non-deterministic action, creating an agent that is influenced by "the ghost in its machine."
== Project Journal ==

''(This section is a living document, updated as experiments progress and new results are achieved.)''

* '''SAGA v1.0 - Cycle 1 (Departure):''' The agent, with no prior experience, followed its baseline training. It correctly identified the most direct path to the goal but was immediately trapped by the unforeseen wall, entering a repetitive failure loop. This confirms the baseline behavior and the necessity of the learning loop.
* '''SAGA v1.0 - Cycle 2 (Trials):''' After receiving the first Saga, the agent's behavior fundamentally changed. It successfully broke from its initial instinct and explored a completely new path, demonstrating that the SAGA learning loop is active and influential. However, this new path also resulted in failure at a new chokepoint.
* '''SAGA v2.0 - Stalled Learning:''' Subsequent experiments revealed that the agent's learning had stalled. The agent would consistently repeat the new, improved path from Cycle 2 but was unable to evolve further. The root cause was identified as a bottleneck in the `SagaGenerator`; unreliable LLM providers (both local and remote) failed to produce the high-quality, prescriptive Sagas needed for the agent to learn a more complex strategy. This highlighted the need for a more reliable LLM and a more advanced training curriculum for the ARC itself.

== Future Training Plans ==
The results of the SAGA v2.0 experiment have shown that while the ARC can learn, its current training is too simplistic. It has learned to follow a direct path to a goal but lacks the foundational "insight" to solve problems that require non-linear solutions (e.g., moving temporarily away from a goal to get around an obstacle).
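This failure mode can be made concrete with a toy gridworld. The sketch below is a hypothetical illustration, not AetherOS project code: a purely greedy, goal-seeking policy (standing in for the agent's baseline instinct) is placed behind a wall. A detour through the open bottom row exists, but taking it requires a lateral step that temporarily increases distance to the goal, so the greedy policy never finds it and simply repeats its blocked move.

```python
# Toy reproduction of the "repetitive failure loop": a greedy policy
# that always steps directly toward the goal gets stuck at a wall,
# even though a lateral detour through the bottom row would succeed.

GRID = [
    "..#..",
    "A.#.G",   # 'A' = agent start, 'G' = goal, '#' = wall
    ".....",   # open row: a non-greedy detour exists here
]

def find(grid, ch):
    return next((r, c) for r, row in enumerate(grid)
                for c, cell in enumerate(row) if cell == ch)

def greedy_step(pos, goal):
    """Step one cell along the axis that most reduces distance to the goal."""
    (r, c), (gr, gc) = pos, goal
    if abs(gc - c) >= abs(gr - r) and gc != c:
        return (r, c + (1 if gc > c else -1))
    return (r + (1 if gr > r else -1), c)

def run(grid, max_steps=6):
    pos, goal = find(grid, "A"), find(grid, "G")
    log = []
    for _ in range(max_steps):
        nxt = greedy_step(pos, goal)
        if grid[nxt[0]][nxt[1]] == "#":
            log.append("bump")   # greedy insists on the blocked move
        else:
            pos = nxt
            log.append("move")
        if pos == goal:
            return "goal"
    return log

print(run(GRID))  # one forward move, then repeated bumps into the wall
```

The policy is deterministic, so once it reaches the wall it bumps into the same cell on every remaining step; nothing in its objective ever rewards the sideways move that would free it.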
The next phase of development, '''Project Gnosis''', will address this by evolving the agent's training curriculum.

=== The "Gnosis" Training Curriculum ===

The final training run for the ARC model will incorporate a new type of training data designed to teach the foundations of insight and strategic retreat. The training data will consist of two types:

# '''Instinct Data (80%):''' The same optimal, straight-line path data used previously to reinforce the agent's primary goal-seeking behavior.
# '''Gnosis Data (20%):''' A new set of procedurally generated scenarios where the agent is placed behind a small, randomly generated obstacle. In these scenarios, the "correct" move is not the most direct one, but a lateral or backward step required to navigate around the barrier.
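The 80/20 mix above could be assembled along the following lines. This is a minimal sketch under stated assumptions: the function names and the dictionary encoding of a scenario are illustrative inventions, not the project's actual data generator.

```python
# Hypothetical sketch of the Gnosis curriculum mix: ~80% straight-line
# "instinct" examples, ~20% procedurally generated obstacle scenarios
# whose labeled correct move is lateral or backward.
import random

def instinct_example(rng):
    """Clear straight-line scenario: the correct move is the direct step."""
    return {"obstacle": None, "correct_move": "forward"}

def gnosis_example(rng):
    """Agent spawns behind a small random obstacle: the correct move is a
    lateral or backward step that temporarily moves away from the goal."""
    return {"obstacle": {"width": rng.randint(1, 3)},
            "correct_move": rng.choice(["left", "right", "back"])}

def build_curriculum(n, instinct_ratio=0.8, seed=0):
    """Sample each example from the instinct pool with probability 0.8."""
    rng = random.Random(seed)
    return [instinct_example(rng) if rng.random() < instinct_ratio
            else gnosis_example(rng) for _ in range(n)]

batch = build_curriculum(10_000)
share = sum(ex["obstacle"] is None for ex in batch) / len(batch)
print(f"instinct share: {share:.1%}")  # close to 80%
```

Sampling per example (rather than concatenating two fixed blocks) keeps the two data types interleaved throughout training, so the obstacle scenarios are seen continuously instead of in a single phase.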
By training ARC on a curriculum that explicitly includes examples of non-linear and counter-intuitive solutions, we will be teaching it the "gnosis" it is currently lacking. This will evolve the agent from one that can only follow a learned policy to one that can develop novel strategies when that policy fails.
== References ==