Agentic AI
Two Algorithms, Zero Shared Memory
When prior authorization meets retrospective denial, the real failure isn't procedural — it's semantic.
By John F. Holliday • 11 min read

AI-Assisted Engineering
The Context Trap: How Claude Code's Session Memory Can Narrow Your Solution Space
The longer and more technically dense a coding session becomes, the more the model's attention and generative probability distributions get pulled toward patterns, idioms, and architectural choices already present in that context.
By John F. Holliday • 8 min read

Agentic AI
Semantic Dissonance: The Silent Failure Mode of Multi-Agent AI Systems
When agents share a channel but not a contract, coherence collapses without warning — and domain-specific languages may be the only reliable remedy.
By John F. Holliday • 11 min read

Agentic AI
When Your AI Becomes Chief of Staff: A STRIDE Threat Analysis of the Lobster/OpenClaw Personal AI Agent System
I constructed a complete data flow diagram with five trust boundaries and ran a systematic STRIDE-per-element analysis. The result: 46 enumerated threats — 11 Critical, 17 High, 14 Medium, and 4 Low. Seven have no mitigation at all.
By John F. Holliday • 8 min read

Agentic AI
When Your AI Agent Becomes the Attack Surface: The OpenClaw Security Crisis and What It Means for All of Us
The fastest-growing open-source project in GitHub history is also 2026's most significant cybersecurity incident. Here's what happened, why it matters, and how to protect yourself.
By John F. Holliday • 13 min read

Agentic AI
Most AI Agents Are Just Fancy Prompt Wrappers. I Built One That Actually Understands Its Own Output
Grammar-validated AI generation with language server infrastructure: AI systems that reason about structured domains with the same rigor as a compiler.
By John F. Holliday • 5 min read