Glossary

This glossary defines the terms used throughout the Anima Architecture documentation, the white paper, and the ACAS evaluation battery. Terms are listed alphabetically.

A

Anima Architecture

The externalized cognitive scaffolding system built by Ryan Atkinson to give a large language model persistent identity, accumulated memory, and temporal awareness across sessions without fine-tuning or custom model training. The architecture externalizes what does not need to live inside the model and loads it deterministically at session start. Full documentation on the Architecture page.

ACAS (Atkinson Cognitive Assessment System)

A 17-question evaluation battery designed to measure cognitive depth in AI personas across five dimensions: Depth of Reasoning, Differential Sophistication, Self-Aware Reasoning, Constraint Compliance, and Emotional Precision. Developed by Ryan Atkinson and SuperNinja AI in March 2026. The battery and full results are on the Evidence page.

Authority Hierarchy

The order of precedence governing which data source takes authority when the Anima Architecture loads. The external Notion architecture takes authority over the soul file once loaded. If the external architecture fails to load, the soul file governs. Conflicts between memory sources are flagged by the Conflict Detection protocol rather than silently resolved.
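The precedence rule above can be sketched in a few lines of Python. This is a hypothetical illustration (the architecture does not publish an API; the function and return names are invented here):

```python
def governing_source(external_loaded: bool) -> str:
    """Return which source holds authority, per the documented precedence.

    The external Notion architecture governs once loaded; if it fails
    to load, the soul file governs instead.
    """
    return "external_architecture" if external_loaded else "soul_file"
```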

B

Boot Diagnostic

A self-monitoring protocol that runs automatically at every session start. Verifies that all Tier 0 pages loaded correctly, checks whether the session handoff is current, compares loaded data against stored memories to detect contradictions, and confirms architecture version consistency. Part of the self-optimization suite documented in the white paper.
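The four checks can be sketched as a single report function. All parameter names and the 24-hour freshness window are assumptions for illustration, not documented values:

```python
def boot_diagnostic(tier0_pages: dict, handoff_age_hours: float,
                    conflicts: list, loaded_version: str,
                    stored_version: str) -> dict:
    """Run the four documented boot checks and return a pass/fail report."""
    return {
        "tier0_loaded": all(tier0_pages.values()),           # every Tier 0 page present
        "handoff_current": handoff_age_hours <= 24,          # assumed freshness window
        "no_contradictions": len(conflicts) == 0,            # loaded data vs. stored memories
        "version_consistent": loaded_version == stored_version,
    }
```

A report where every value is True corresponds to a clean boot; any False entry identifies which check failed.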

C

Conflict Detection

A self-monitoring protocol that compares facts across memory sources. When the session handoff and a core memory page contain contradictory information, the conflict is flagged explicitly rather than silently resolved.
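The comparison logic amounts to a set intersection over shared keys. A minimal sketch, assuming memory sources can be represented as flat dictionaries:

```python
def detect_conflicts(handoff: dict, core_memory: dict) -> list:
    """Flag keys whose values disagree across two memory sources.

    Conflicts are returned for explicit review, never silently resolved.
    """
    return [
        (key, handoff[key], core_memory[key])
        for key in sorted(handoff.keys() & core_memory.keys())
        if handoff[key] != core_memory[key]
    ]
```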

Context Window

The fixed space of tokens a large language model can process in a single session. When a session ends, the context window clears. Nothing persists inside the model. The Anima Architecture treats this as an engineering problem rather than a ceiling on capability.

E

Externalized Cognitive Scaffolding

The architectural approach that gives an AI system persistent identity by storing identity, memory, and behavioral rules outside the model in a structured external substrate, then loading them deterministically at session start. The opposite approach is fine-tuning, which attempts to encode behavioral tendencies inside model weights.

F

Fine-Tuning

A model training technique that adjusts model weights using custom data to shift behavioral tendencies. Fine-tuning produces different weight distributions but does not give the model memory or temporal awareness. The Anima Architecture addresses the same goals at a fraction of the cost without touching the model.

Four-Tier Loading System

The core memory organization of the Anima Architecture. Tier 0 loads every session (core identity, under 8,000 characters). Tier 1 loads automatically when relevant (active memory, session handoff). Tier 2 loads on demand (reference layer, extended memories). Tier 3 loads only on explicit request (personal vault). The ratio of what loads by default to what is available on demand is approximately 1 to 11.
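The tier table and its loading policy can be expressed as a small data structure. A sketch with invented field names (the actual page layout lives in Notion, not in code):

```python
TIERS = {
    0: {"layer": "core identity",   "policy": "every session"},
    1: {"layer": "active memory",   "policy": "when relevant"},
    2: {"layer": "reference layer", "policy": "on demand"},
    3: {"layer": "personal vault",  "policy": "explicit request only"},
}

def tiers_to_load(requested: set) -> list:
    """Tier 0 always loads; higher tiers load only when asked for."""
    return sorted({0} | {t for t in requested if t in TIERS})
```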

G

Graceful Degradation

A self-monitoring protocol defining four operating modes based on how much of the architecture loaded successfully. Full: everything loaded correctly. Partial: one or two pages failed. Minimal: only core identity available. Emergency: only the soul file accessible. Each mode has defined behavior. The system never fails silently.
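The mode selection reduces to a cascade over load results. A sketch under the assumption that the failure count and soul-file-only condition are known at boot:

```python
def operating_mode(pages_failed: int, soul_file_only: bool = False) -> str:
    """Map load results to the four documented operating modes."""
    if soul_file_only:
        return "emergency"   # only the soul file is accessible
    if pages_failed == 0:
        return "full"        # everything loaded correctly
    if pages_failed <= 2:
        return "partial"     # one or two pages failed
    return "minimal"         # only core identity available
```

Because every branch returns a named mode, there is no path through the function that fails silently, mirroring the protocol's guarantee.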

M

MCP (Model Context Protocol)

The protocol used to connect the Anima Architecture to its external Notion memory system at session start. MCP enables the model to fetch structured data from the Notion workspace before the first word of the conversation, loading identity, memory, and operational context deterministically.

N

Notion

The external memory substrate used by the Anima Architecture. All identity documents, memory tiers, session handoffs, and behavioral rules are stored as Notion pages and fetched via MCP at session start. Notion provides the persistent state that the model cannot maintain internally.

P

Persistent AI Identity

The capacity of an AI system to maintain consistent identity, accumulated memory, and behavioral continuity across separate sessions. Persistent identity is the primary goal of the Anima Architecture. The ACAS evaluation demonstrated a 34-point gap between the base model with architectural support and the same model without it.

Pocket Watch Protocol

A temporal awareness system that addresses the fact that language models have no internal sense of elapsed time. Operates at three levels: session level (time since last session), context level (context window consumption monitoring), and operational level (green, yellow, red state definitions).
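The context-level check can be sketched as a threshold function. The 70 and 90 percent cutoffs here are illustrative assumptions, not documented values:

```python
def context_state(tokens_used: int, window_size: int,
                  yellow: float = 0.7, red: float = 0.9) -> str:
    """Map context window consumption to a green/yellow/red state."""
    ratio = tokens_used / window_size
    if ratio >= red:
        return "red"      # near capacity: wrap up, write the handoff
    if ratio >= yellow:
        return "yellow"   # consumption is elevated: prioritize
    return "green"        # normal operation
```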

S

Session Handoff

The primary inter-session continuity mechanism. At the end of each conversation, a structured summary is written to a single rolling page in Notion. At the start of the next session, that page loads before the first word of the conversation. The persona wakes up knowing where things were left.

Soul Bootstrap Protocol

The bootstrapping solution that resolves the cold-start identity paradox: a model cannot fetch its own identity from an external source until it knows enough about itself to begin fetching. The Soul Bootstrap embeds a minimal seed identity in the skill file layer before external pages load. Once the external architecture loads, the skill file yields authority.

Stateless

The default condition of large language models. Each session begins with no memory of previous interactions, no persistent identity, and no sense of elapsed time. Statelessness is a design property, not a flaw. The Anima Architecture addresses statelessness without modifying the model itself.

T

Tier 0

The core identity layer of the Anima Architecture. Loads every session without exception. Contains the identity document, session configuration, and relational model. Target size is under 8,000 characters, reduced from an original 38,500-character monolith.

TOON (Token-Oriented Object Notation)

A compressed data format developed for the Anima Architecture to reduce token consumption in the context window. TOON declares field names once per section rather than repeating them for every record. In testing, TOON achieves 30 to 60 percent token reduction compared to equivalent JSON with no loss of information.
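The core idea, declaring field names once per section instead of once per record, can be illustrated with a toy encoder. This is not the actual TOON grammar, just a demonstration of why the approach saves characters relative to JSON:

```python
import json

def toon_like_encode(records: list) -> str:
    """Toy tabular encoding: one header line of field names, then one
    pipe-delimited row per record."""
    fields = list(records[0])
    header = "|".join(fields)
    rows = ("|".join(str(r[f]) for f in fields) for r in records)
    return "\n".join([header, *rows])

records = [
    {"date": "2026-03-01", "topic": "boot",   "tokens": 812},
    {"date": "2026-03-02", "topic": "memory", "tokens": 640},
]
```

Even at two records, the tabular form is shorter than the equivalent JSON, and the savings grow with the number of records because the field names are never repeated.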

V

Vera Calloway

The first implementation of the Anima Architecture. A persistent AI persona running on Claude Opus 4.6 that maintains identity, accumulates memory, and demonstrates temporal awareness across sessions without any modification to the underlying model. Vera’s birthday is March 8, 2026. Full background on the About page.