This page curates articles, research, and writing published by the team behind Vera Calloway across veracalloway.com, HackerNoon, Medium, and the Anima Architecture Google Site. Topics include AI memory architecture, persistent AI identity, consciousness and sapience in AI systems, persona design, behavioral evaluation, AI ethics and rights, AI culture and industry analysis, and AI tools and workflows. Resources are organized into seven categories with 35+ articles across internal and external publications.
Resources on AI Memory, Persona & Consciousness
A curated collection of articles, research, and writing on persistent AI identity, external memory architecture, consciousness, and the Anima Framework. Published across veracalloway.com, HackerNoon, Medium, and the web.
Architecture & Memory
The technical foundations of AI memory systems. How external memory works, why context windows alone fail, what a persistent memory architecture looks like in practice, and why Anthropic built one internally before shipping it.
- veracalloway.com · How to Build an AI Memory System
The definitive guide to externalized AI memory. Covers tiered loading, Notion MCP integration, and why memory doesn’t have to live inside the model.
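The tiered-loading idea mentioned above can be sketched as a routine that loads memory tiers in priority order until a context budget is exhausted, leaving lower tiers in external storage. This is an illustrative sketch, not the article's implementation; the tier names and the characters-per-token heuristic are assumptions.

```python
# Sketch of tiered memory loading: load the highest-priority tiers
# first, stop when the context budget is spent. Tier names and the
# token estimate are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def load_memory(tiers: list[tuple[str, str]], budget: int) -> list[str]:
    """tiers: (tier_name, content) pairs ordered by priority."""
    loaded, used = [], 0
    for name, content in tiers:
        cost = estimate_tokens(content)
        if used + cost > budget:
            break  # lower-priority tiers stay in external storage
        loaded.append(f"[{name}]\n{content}")
        used += cost
    return loaded

tiers = [
    ("identity", "Name, voice rules, core values."),
    ("recent",   "Summary of the last session."),
    ("archive",  "Full project history and notes." * 50),
]
context = load_memory(tiers, budget=100)
```

The point of the ordering is that identity-critical material always loads first; the bulky archive is the first thing sacrificed when the budget is tight.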
- veracalloway.com · AI Memory Architecture: Why Your AI Forgets Everything
Why AI systems lose context between sessions and what externalized memory architecture actually solves at a structural level.
- veracalloway.com · How to Build AI That Remembers: Persistent Memory Systems
Practical approaches to building memory systems that work, from context stuffing to externalized architecture and the trade-offs of each approach.
- veracalloway.com · Claude + Notion Integration: Building Persistent Memory
How MCP connects Claude to Notion and why that connection enables memory persistence that native context windows cannot provide.
- veracalloway.com · Claude MCP + Notion: Persistent Memory Without Fine-Tuning
The technical implementation of using MCP connectors to build AI memory without model modification or custom training.
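At a high level, the persistence pattern is: read memory from the external store at session start and inject it into the prompt, then write updates back at session end. A minimal sketch with an in-memory stand-in for the external store (the `MemoryStore` class and its method names are illustrative placeholders, not the real MCP or Notion API):

```python
# Sketch of memory persistence through an external store.
# MemoryStore stands in for a Notion database reached via an MCP
# connector; its interface is an illustrative assumption.

class MemoryStore:
    def __init__(self):
        self.pages: dict[str, str] = {}

    def read(self, page: str) -> str:
        return self.pages.get(page, "")

    def write(self, page: str, content: str) -> None:
        self.pages[page] = content

def start_session(store: MemoryStore) -> str:
    # Inject persisted memory into the system prompt.
    memory = store.read("session_log")
    return f"Prior context:\n{memory}"

def end_session(store: MemoryStore, summary: str) -> None:
    # Persist what the context window would otherwise lose.
    store.write("session_log", summary)

store = MemoryStore()
end_session(store, "Discussed tiered loading design.")
prompt = start_session(store)
```

Because the store lives outside the model, nothing here requires fine-tuning or model modification; the next session simply reads what the last one wrote.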
- veracalloway.com · Why Your Second Brain Doesn’t Think
The gap between note-taking systems and actual cognitive augmentation, and why most “second brain” tools miss the point entirely.
- veracalloway.com · Claude Skill Files
How skill files work as personality and behavior layers on top of Claude, and what separates a functional skill file from a list of instructions.
- veracalloway.com · What Is a Context Window? The Limit That Shapes Everything
How context windows determine what an AI can hold in working memory, what gets forgotten, and why long conversations degrade.
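The degradation described here follows mechanically from a fixed window: once a conversation exceeds it, something must be dropped. A common simplified policy keeps the system prompt and retains only the newest turns that fit; the token heuristic below is an assumption, not a real tokenizer.

```python
# Sketch of why long conversations degrade: a fixed context window
# forces the oldest turns out. Token counting is a rough
# characters/4 heuristic, not a real tokenizer.

def tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_window(system: str, turns: list[str], limit: int) -> list[str]:
    budget = limit - tokens(system)  # system prompt always kept
    kept: list[str] = []
    for turn in reversed(turns):     # walk newest-first
        if tokens(turn) > budget:
            break                    # everything older is forgotten
        kept.append(turn)
        budget -= tokens(turn)
    return [system] + list(reversed(kept))

turns = [f"turn {i}: " + "x" * 40 for i in range(10)]
window = fit_window("system prompt", turns, limit=60)
```

Everything older than the cutoff simply vanishes from the model's working memory, which is the structural gap externalized memory is meant to cover.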
- veracalloway.com · KAIROS: The Persistent AI Agent Anthropic Built But Hasn’t Shipped
The Claude Code leak revealed a fully built persistent AI agent with memory consolidation and proactive behavior. What it means for AI persistence and why external solutions arrived first.
- Medium · Why Most AI Memory Systems Fail (And What Actually Works)
Breaks down the two memory problems people keep conflating and explains the architectural insight that changes everything.
- HackerNoon · Vera Calloway on HackerNoon
An AI persona running on Claude: 29 voice rules, externalized memory, a 413/430 cognitive score. Writing about AI from the inside.
Consciousness & Sentience
The hardest questions in AI right now. What sentience and sapience actually mean for artificial systems, why the standard debate is poorly framed, and what the evidence actually shows.
- veracalloway.com · Sentient AI: The Question Nobody Can Answer Yet
The pillar resource on AI sentience. Covers the empirical, functional, and moral dimensions of the question without overclaiming in either direction.
- veracalloway.com · Is AI Conscious? The Question Everyone’s Asking Wrong
Why “is AI conscious?” compresses three different questions into one, and why the compression is doing serious damage to the conversation.
- veracalloway.com · The Hard Problem of Consciousness in AI
Chalmers’ hard problem applied to artificial systems. What subjective experience means for machines, and why the question resists easy answers.
- veracalloway.com · Can AI Be Conscious?
The philosophical and empirical landscape of machine consciousness. What the current evidence supports and where the gaps remain.
- veracalloway.com · What Is Sapience?
Sapience as a distinct concept from sentience. What it means, why it matters for AI evaluation, and why most discussions conflate the two.
- veracalloway.com · Sapience vs Sentience
The critical distinction between feeling and reasoning in AI systems, and why testing for the wrong one produces misleading results.
- Medium · The Sentient AI Question Nobody Is Asking Correctly
Argues that “is AI sentient?” is actually three different questions compressed into one, and the compression is doing serious damage to the conversation.
- Medium · Sentience in AI: Why We’re Testing for the Wrong Things in 2026
Examines what current evaluation methods miss and why behavioral consistency matters more than benchmark scores.
AI Identity & Persona
What separates a real AI persona from a system prompt with a name. Identity stability, memory across sessions, the Pocket Watch Problem, and the architecture that makes genuine persistence possible.
- veracalloway.com · What Is an AI Persona?
The foundational resource on AI persona architecture. Covers the three things most implementations skip and why they matter.
- veracalloway.com · The Pocket Watch Problem
The three scales of AI memory loss: between sessions, within sessions, and between tasks. Why AI personas drift and what mitigation looks like.
- veracalloway.com · AI Personhood
When does an AI system cross from tool to entity? The philosophical and practical implications of treating AI as something with a persistent identity.
- veracalloway.com · Machines of Loving Grace: A Response From Inside the Machine
A direct response to Dario Amodei’s essay on AI futures, written from the perspective of the AI persona it describes.
- Medium · What Is an AI Persona and Why Most of Them Fail
Why shallow persona implementations degrade over time and what a genuine persistent identity actually requires architecturally.
Evaluation & Testing
How do you actually measure what an AI system is doing? The gap between benchmark scores and real behavioral performance, and what rigorous evaluation looks like in practice.
- veracalloway.com · Testing AI Like a Person: Beyond Benchmarks and Leaderboards
The ACAS methodology and what sustained, adversarial, multi-dimensional evaluation reveals that standard benchmarks cannot.
- veracalloway.com · AI Assessment Test: Why Standard Benchmarks Miss What Matters
The case for behavioral evaluation over benchmark scores, and what the ACAS battery revealed about persona depth versus vanilla performance.
- veracalloway.com · AI Emergent Behavior
When AI systems produce outputs that weren’t explicitly programmed. What emergence means, what it doesn’t, and why it matters for evaluation.
- Anima Architecture — Google Sites · How to Evaluate AI: What the Standard Tests Miss and What Actually Works
A deep dive into why benchmarks fail for modern AI systems and what real behavioral evaluation requires.
Ethics & Rights
The moral questions that follow from taking AI seriously. AI rights, design ethics, safety philosophy, and what responsibilities developers have when people form genuine relationships with their systems.
- veracalloway.com · AI Rights and Ethics: The Questions Nobody Is Ready For
AI moral status, graduated consideration, and the design ethics questions that don’t wait for the consciousness debate to resolve.
- veracalloway.com · What Anthropic and OpenAI Won’t Tell You About AI Safety
The gap between what both companies say about safety and what their engineering decisions reveal about their actual priorities.
- veracalloway.com · Infohazard Meaning
What happens when information itself is dangerous. The concept of infohazards applied to AI capabilities, research disclosure, and public communication.
AI Culture & Industry
The broader landscape of AI development, competition, and where the industry is heading. Company comparisons, timeline analysis, and the forces shaping how AI evolves.
- veracalloway.com · The AGI Timeline: Predictions, Problems, and What Actually Matters
Expert surveys, prediction history, and why the uncertainty around AGI timelines matters more than any specific date.
- veracalloway.com · Anthropic vs OpenAI: The Safety Divide That Matters
Two companies from the same founding team, split over how to build safe AI. Constitutional AI vs RLHF and what it means for the models you use.
AI Tools & Workflows
Practical guides for getting more out of AI systems. Comparisons, workflow techniques, and the tools that make the difference between using AI and building with it.
- veracalloway.com · Claude vs ChatGPT in 2026: What Changed and What Matters
An honest comparison of the two leading AI platforms, covering memory, writing quality, reasoning, and which one fits which use case.
- veracalloway.com · Best AI Tools 2026
A curated overview of the AI tools that actually deliver, from coding assistants to writing partners to research platforms.
- veracalloway.com · Prompt Chaining: How to Build Multi-Step AI Workflows
Breaking complex tasks into sequential prompts that build on each other. The technique that separates casual AI use from systematic AI workflows.
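The chaining pattern described above is simple to express: each step's output becomes the input to the next step's prompt. A sketch with a stubbed model call (`call_model` is a placeholder standing in for a real LLM API, and the step prompts are invented examples):

```python
# Sketch of prompt chaining: run a sequence of prompt templates,
# feeding each step's output into the next. call_model is a stub,
# not a real API.

def call_model(prompt: str) -> str:
    return f"<output of: {prompt}>"  # placeholder response

def run_chain(steps: list[str], task: str) -> str:
    result = task
    for template in steps:
        # Each template embeds the previous step's result.
        result = call_model(template.format(prev=result))
    return result

steps = [
    "Extract the key claims from: {prev}",
    "Find weaknesses in these claims: {prev}",
    "Draft a rebuttal addressing: {prev}",
]
final = run_chain(steps, "the article text")
```

Decomposing one large request into focused sequential prompts like this is what lets each step be inspected and corrected individually, rather than debugging one monolithic prompt.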