In the classic Star Trek episode "The Return of the Archons," the crew of the Enterprise encounters a society controlled by Landru—an ancient leader who created an all-powerful computer system to govern his civilization. When faced with any disruption, citizens would desperately cry "Landru! Save us!"—ironically appealing to the very system that had stripped them of their autonomy. It's a fitting parallel for our increasing dependence on AI systems to manage the complexity we've created.

Like the citizens of Beta III who surrendered their judgment to Landru's computer, we're rapidly approaching a similar inflection point with enterprise content and Agentic AI. But there's a fundamental principle at work here that goes deeper than science fiction.

Human beings, like all mammals and most animals, exhibit one distinctive trait—bilateral symmetry. You know—2 eyes, 2 ears, 2 arms, 2 legs, and so on.

Consider the human brain with its 2 distinct hemispheres. Both are necessary for our survival. While the left hemisphere is commonly associated with managing sequences of actions, the right supplies the broader context for evaluating and prioritizing what comes next. One is pretty useless without the other.

A similar observation applies to enterprise content in the age of Agentic AI. Consider how an AI agent processes a document—is examining the text alone sufficient? Not likely. The agent needs metadata, relationships, and organizational context to determine appropriate actions, whether that's routing for approval, triggering workflows, or ensuring compliance.

Is the document part of an active project with specific governance requirements? Does it contain sensitive data requiring special handling? Was it generated by automated processes or human experts? How does it relate to other content in the knowledge graph?

This contextual analysis is like having a separate hemisphere—constantly associating content with other objects, processes, and patterns across the enterprise ecosystem. Agentic AI systems employ semantic analysis, graph databases, and behavioral learning to construct this context.
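As a toy illustration of the graph-database piece, the snippet below models enterprise context as (subject, relation, object) triples. The node and relation names are made up for the example; a production system would use a real graph store and a richer schema:

```python
from collections import defaultdict

class ContextGraph:
    """Minimal in-memory graph of content relationships."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def context_of(self, node):
        """Return everything directly related to a content item."""
        return self.edges[node]

g = ContextGraph()
g.add("doc-42", "part_of", "project-apollo")
g.add("doc-42", "supersedes", "doc-17")
g.add("project-apollo", "governed_by", "policy-soc2")

print(g.context_of("doc-42"))
# prints [('part_of', 'project-apollo'), ('supersedes', 'doc-17')]
```

Even this crude lookup shows the idea: the agent's "right hemisphere" is a query over relationships, not a second pass over the text.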

It's not enough for an AI agent to simply classify or process content—it must factor in the dynamic context defined by related documents, active workflows, compliance states, and organizational knowledge. The same holds true for content lifecycle management, security classification, and knowledge extraction.

Enterprise content management in the agentic era isn't just about storage and retrieval—we need that contextual intelligence to enable autonomous decision-making and action. Like living organisms, our digital systems cannot survive without this bilateral capability.

In our newly crafted 'agentic universe', do we have what it takes to survive? I'm starting to wonder if the exponential growth of autonomous agents, coupled with the ever-increasing complexity needed just to answer the question "What should this content trigger next?", might ultimately make human oversight a moot point.

Perhaps we're already building our own Landru—one API call at a time.

Landru! Save us!