Agentic AI • When Your AI Becomes Chief of Staff: A STRIDE Threat Analysis of the Lobster/OpenClaw Personal AI Agent System. I constructed a complete data flow diagram with five trust boundaries and ran a systematic STRIDE-per-element analysis. The result: 46 enumerated threats — 11 Critical, 17 High, 14 Medium, and 4 Low. Seven have no mitigation at all. By John F. Holliday • 8 min read
Agentic AI • When Your AI Agent Becomes the Attack Surface: The OpenClaw Security Crisis and What It Means for All of Us. The fastest-growing open-source project in GitHub history is also 2026's most significant cybersecurity incident. Here's what happened, why it matters, and how to protect yourself. By John F. Holliday • 13 min read
Agentic AI • Most AI Agents Are Just Fancy Prompt Wrappers. I Built One That Actually Understands Its Own Output. Grammar-validated AI generation with language server infrastructure: AI systems that reason about structured domains with the same rigor as a compiler. By John F. Holliday • 5 min read
Agentic AI • Instinct vs. Deliberation: How Anthropic and OpenAI Train Their Models to Follow the Rules — And Why It Matters for Enterprise AI. The most consequential technical distinction in enterprise AI isn't about which model is smarter. It's about how each model was taught to be safe — and what that means when you deploy agents that make real-world decisions. By John F. Holliday • 12 min read
Agentic AI • MCP Needs a Type System, Part 2: Building the Contract Layer. Tool descriptions suggest constraints to LLMs, but suggestions aren't guarantees. So what would formal MCP contracts actually look like? By John F. Holliday • 7 min read
Agentic AI • MCP Needs a Type System, Part 1: Six Incidents That Expose the Protocol's Blind Spot. Your AI agents are only as safe as the contracts they honor — and right now, MCP doesn't have any. By John F. Holliday • 6 min read
Agentic AI • Deontic Logic for Agent Permissions: A Formal Framework for AI Agent Governance. The fundamental question of what agents are permitted to do remains governed by ad-hoc JSON schemas and vibes-based access control. Wesley Hohfeld's decomposition of rights into eight fundamental relations — and deontic logic's formal operators — provide exactly the rigor AI agent governance needs. By John F. Holliday • 8 min read
Agentic AI • Extending the AI Periodic Table: Two Missing Elements for the Semantic AI Era. Just as you wouldn't build a house without blueprints, you shouldn't build AI guardrails without a DSL that precisely defines acceptable behavior. By John F. Holliday • 8 min read
Machine Intelligence • Attention is Not All We Need: The Case for Meaning By Design. The title of the original Transformer paper was elegant marketing. It was also philosophically careless. Attention is a mechanism, and mechanisms don't yield meaning. You can't get semantics from syntax alone, no matter how much compute you throw at the problem. By John F. Holliday • 7 min read
Agentic AI • Everything Everywhere All at Once: How Transformers Changed the Way Machines Understand Language. In 2017, Google researchers proposed throwing out sequential language processing entirely. Their Transformer architecture could see everything, everywhere, all at once — and it changed AI forever. By John F. Holliday • 8 min read
Information Governance • Generative AI and Information Governance: A Practitioner's Guide. The information governance landscape is undergoing its most significant transformation since the advent of cloud computing. Generative AI — the same technology powering tools like ChatGPT and Claude — is rapidly moving from novelty to necessity in how organizations manage, classify, and protect their data assets. By John F. Holliday • 12 min read
Agentic AI • Teaching AI Agents to Color Inside the Lines. The challenge isn't that AI is stupid — it's that AI is like an overconfident intern who read the handbook once. The solution? Give AI a menu of approved actions instead of hoping it figures out your business. Structured semantic AI agents deliver predictability, compliance, and trust at scale. By John F. Holliday • 3 min read