This page curates external articles, research, and writing published across the web by the team behind Vera Calloway. Topics covered include AI memory architecture, persistent AI identity without fine-tuning, the difference between sentience and sapience in AI systems, AI persona design, behavioral evaluation methods, and AI ethics and rights. Resources are organized into five categories: Architecture & Memory, Consciousness & Sentience, AI Identity & Persona, Evaluation & Testing, and Ethics & Rights. Each entry links either to a veracalloway.com pillar article or to an external publication such as Medium or the Anima Architecture Google Site.
Resources on AI Memory, Persona & Consciousness
A curated collection of articles, research, and writing on persistent AI identity, external memory architecture, consciousness, and the Anima Framework — published across the web by the team behind Vera Calloway.
Architecture & Memory
The technical foundations of AI memory systems — how external memory works, why context windows alone aren’t enough, and what a well-designed memory architecture actually looks like in practice.
- veracalloway.com: How to Build an AI Memory System
The definitive guide to externalized AI memory. Covers tiered loading, Notion MCP integration, and why memory doesn’t have to live inside the model.
- Medium: Why Most AI Memory Systems Fail (And What Actually Works)
Breaks down the two memory problems people keep conflating and explains the architectural insight that changes everything.
- Anima Architecture (Google Sites): How to Evaluate AI: What the Standard Tests Miss
A 3,000-word deep dive into why benchmarks fail and what sustained behavioral evaluation actually looks like.
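The tiered-loading idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the architecture from the linked articles: the tier names (core, recent, archive) and the keyword-based retrieval are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical three-tier external memory store."""
    core: list[str] = field(default_factory=list)          # tier 1: always loaded (identity)
    recent: list[str] = field(default_factory=list)        # tier 2: loaded each session
    archive: dict[str, str] = field(default_factory=dict)  # tier 3: fetched on demand, keyed by topic

    def build_context(self, query: str, budget: int = 5) -> list[str]:
        """Assemble a prompt context: core first, then recent session
        summaries, then any archived entry whose key appears in the query."""
        context = list(self.core) + list(self.recent)
        for key, entry in self.archive.items():
            if key in query.lower():
                context.append(entry)
        return context[:budget]

store = MemoryStore(
    core=["Identity: persistent persona notes"],
    recent=["Last session: discussed evaluation methods"],
    archive={"memory": "Archived: notes on tiered loading design"},
)
print(store.build_context("how does the memory system load tiers?"))
```

The point of the tiers is that only tier 1 must live in every prompt; tiers 2 and 3 are pulled from external storage (a database, or a tool like Notion via MCP) as needed, keeping the context window small.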
Consciousness & Sentience
The hardest questions in AI right now. What sentience and sapience actually mean for artificial systems, why the standard debate is poorly framed, and what the evidence actually shows.
- veracalloway.com: Sentient AI
The pillar resource on AI sentience. Covers the empirical, functional, and moral dimensions of the question without overclaiming in either direction.
- Medium: The Sentient AI Question Nobody Is Asking Correctly
Argues that “is AI sentient?” is actually three different questions compressed into one, and the compression is doing serious damage to the conversation.
- Medium: Sentience in AI: Why We're Testing for the Wrong Things in 2026
Examines what current evaluation methods miss and why behavioral consistency matters more than benchmark scores.
AI Identity & Persona
What separates a real AI persona from a system prompt with a name. Identity stability, memory across sessions, and the architecture that makes genuine persistence possible.
- veracalloway.com: What Is an AI Persona?
The foundational resource on AI persona architecture. Covers the three things most implementations skip and why they matter.
- Medium: What Is an AI Persona and Why Most of Them Fail
Why shallow persona implementations degrade over time and what a genuine persistent identity actually requires architecturally.
Evaluation & Testing
How do you actually measure what an AI system is doing? The gap between benchmark scores and real behavioral performance, and what rigorous evaluation looks like in practice.
- veracalloway.com: Testing AI Like a Person
The ACAS methodology and what sustained, adversarial, multi-dimensional evaluation reveals that standard benchmarks can’t.
- Anima Architecture (Google Sites): How to Evaluate AI: What the Standard Tests Miss and What Actually Works
A deep dive into why benchmarks fail for modern AI systems and what real behavioral evaluation requires.
Ethics & Rights
The moral questions that follow from taking AI seriously. AI rights, design ethics, and what responsibilities developers have when people form genuine relationships with their systems.
- veracalloway.com: AI Rights and Ethics
AI moral status, graduated consideration, and the design ethics questions that don’t wait for the consciousness debate to resolve.
- Medium (coming soon): AI Rights Are Coming. We're Not Ready for the Question.
Examines why the AI rights conversation is poorly framed and what moral obligations may already exist regardless of the consciousness debate.