
“The greatest risk is not malicious intent, but blind trust in models that no longer reflect reality.”

AI Is Entering the Enterprise Through Many Doors

Unlike previous technology waves, AI does not enter the enterprise through a single deployment model, so there is no single “AI perimeter” to secure. Instead, AI shows up through a wide variety of channels, each with different implications for visibility and control.

Browser-based AI tools are among the most common entry points. Employees use them for drafting content, analyzing documents, generating ideas, or troubleshooting technical problems. These interactions typically occur over standard HTTPS connections and are indistinguishable from normal web browsing. From a security perspective, this traffic looks benign, even when sensitive business data is being shared.
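
To make that visibility problem concrete, the sketch below flags web-proxy log entries whose destination matches a curated list of AI service domains. The log format, column names, and the domain list are assumptions for illustration; the underlying point is that without a deliberately maintained list like this, these requests are just more HTTPS.

# Illustrative sketch: flag outbound requests to known AI service domains
# in a web-proxy log. The CSV layout and the domain list below are
# assumptions for illustration, not a vetted inventory.

import csv

AI_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    # ...extend with the AI services relevant to your environment
}

def flag_ai_traffic(log_path: str):
    """Yield (timestamp, user, host) for requests to known AI domains."""
    with open(log_path, newline="") as f:
        # Assumes columns: timestamp, user, host
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                yield row["timestamp"], row["user"], host

for ts, user, host in flag_ai_traffic("proxy_log.csv"):
    print(f"{ts} {user} -> {host}")
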
Beyond browsers, AI agents are increasingly embedded into workflows. These agents may operate within development pipelines, customer support systems, or internal automation frameworks. Some run continuously, performing tasks in the background without direct user interaction. Others are triggered by events, data changes, or API calls, acting independently once configured, as the sketch below illustrates.
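
As a minimal illustration of such an event-triggered agent, the sketch below receives a webhook event, calls a placeholder model function, and acts on the result with no human in the loop. The port and the summarize() helper are hypothetical stand-ins.

# Minimal sketch of an event-triggered agent: a webhook receives an event,
# the agent calls a model, then acts on the result without user interaction.
# The summarize() function is a placeholder for a hosted model API call.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def summarize(text: str) -> str:
    # Placeholder for a call to a hosted model API.
    return text[:200]

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        summary = summarize(event.get("description", ""))
        # From here the agent acts on its own: files a ticket, posts a
        # comment, updates a record, and so on.
        print("agent action:", summary)
        self.send_response(204)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), AgentHandler).serve_forever()
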
In parallel, organizations are integrating AI through plugins, APIs, and embedded features within SaaS applications. Many of these integrations are enabled with minimal friction, often requiring only an API key or an OAuth permission. In some cases, teams experiment with locally hosted models or custom-built agents designed for specific business needs.

Each of these adoption paths introduces new traffic patterns, trust relationships, and data flows. Collectively, they create an environment where AI activity is everywhere, yet rarely centralized or standardized.

Why These AI Data Flows Are Hard to See

One of the most significant challenges introduced by enterprise AI adoption is the erosion of visibility. Traditional security tools rely on clear signals: users authenticate, applications connect to known destinations, and policies are enforced at well-defined points. AI disrupts this model.

AI-driven interactions often blend seamlessly into existing traffic. A request to an AI service may look no different from a routine API call or web request, and responses may contain synthesized data that obscures the original inputs, making it harder to track what information was shared or transformed.

AI agents further complicate matters by operating outside the bounds of traditional user sessions. An agent may act on behalf of a user long after that user has logged out, or it may operate under a service identity that lacks meaningful context. In many cases, security teams cannot easily attribute actions to intent, making it difficult to distinguish legitimate automation from risky behavior.
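
One pattern for narrowing that attribution gap is to carry the originating user's identity with every call an agent makes under a shared service identity, so downstream logs can tie actions back to intent. A minimal sketch follows, assuming hypothetical header conventions; X-On-Behalf-Of and X-Agent-Run-Id are illustrative names, not a standard.

# Sketch: preserve user attribution when an agent calls an external
# service under a shared service identity. Header names are illustrative
# conventions invented for this example.

import json
import urllib.request

def agent_call(url: str, payload: dict, on_behalf_of: str, run_id: str):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-On-Behalf-Of": on_behalf_of,  # the human whose task this is
            "X-Agent-Run-Id": run_id,        # correlates logs across hops
        },
    )
    return urllib.request.urlopen(req)
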
These blind spots create a dangerous asymmetry. While organizations believe they are maintaining control, sensitive data may be flowing to external services, third-party models, or unvetted endpoints without clear oversight. Without consistent visibility, security teams are left reacting to incidents rather than proactively managing risk.

The Limits of Traditional, Centralized Security Controls

Historically, enterprises have relied on centralized security controls to enforce policy. Proxies, gateways, and firewalls were designed to inspect traffic as it passed through known chokepoints. This model worked well when applications and users followed predictable network paths.

AI breaks this assumption. Modern applications and AI agents often communicate directly with cloud services, bypassing traditional gateways entirely. Endpoints may establish outbound connections from anywhere, at any time, based on dynamic logic that is difficult to anticipate. Attempting to force all AI-related traffic back through centralized inspection points introduces latency, degrades performance, and frustrates users.

In response, organizations frequently make exceptions. Certain applications are bypassed, specific destinations are whitelisted, or inspection is relaxed to preserve productivity. Over time, these exceptions accumulate, weakening the overall security posture and creating inconsistent enforcement.

The fundamental issue is architectural. Centralized controls struggle to scale in an environment where work is distributed, automated, and increasingly driven by AI. Security must adapt to follow activity, not force activity to conform to outdated enforcement models.
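
One way to picture security that follows activity is an egress decision evaluated on the endpoint where the connection originates, rather than at a central chokepoint. A minimal sketch, with a rule structure and data-classification labels invented for illustration:

# Sketch of an endpoint-local egress decision: policy is evaluated where
# the connection originates instead of at a central inspection point.
# Rules, labels, and hosts are assumptions for illustration.

POLICY = [
    {"host_suffix": "api.internal.example.com", "allow": True},
    {"host_suffix": "api.openai.com", "allow": True,
     "max_classification": "public"},
]

RANK = {"public": 0, "internal": 1, "confidential": 2}

def allow_egress(host: str, data_classification: str) -> bool:
    for rule in POLICY:
        if host.endswith(rule["host_suffix"]):
            cap = rule.get("max_classification", "confidential")
            return rule["allow"] and RANK[data_classification] <= RANK[cap]
    return False  # default-deny for unknown destinations

assert allow_egress("api.openai.com", "public")
assert not allow_egress("api.openai.com", "confidential")
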
Policy Fragmentation Creates New Security Gaps

As awareness of AI-related risk grows, many organizations attempt to address it by adding new tools or policies specifically for AI usage. Data loss prevention rules are updated, AI-specific governance platforms are introduced, and additional monitoring solutions are layered on top of existing infrastructure.

While well-intentioned, this approach often leads to policy fragmentation. One set of rules governs user access, another controls application behavior, and yet another attempts to manage AI interactions. These policies are often defined in different tools, using different abstractions, and enforced at different points in the infrastructure.
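
One way to contain that fragmentation is to express a rule once and render it into each enforcement point's format, so the same intent is not restated by hand in different tools. A minimal sketch with simplified stand-in formats; neither output matches any real product's syntax.

# Sketch: one policy definition rendered for several enforcement points,
# rather than re-stating the same rule in different tools with different
# abstractions. Both target formats are simplified stand-ins.

RULE = {
    "subject": "finance-team",
    "ai_service": "api.openai.com",
    "effect": "deny",
}

def to_proxy_rule(r):  # stand-in for a proxy/gateway config line
    return f"{r['effect']} group={r['subject']} dest={r['ai_service']}"

def to_dlp_rule(r):    # stand-in for a DLP policy entry
    return {"match_group": r["subject"], "block_destination": r["ai_service"]}

print(to_proxy_rule(RULE))
print(to_dlp_rule(RULE))
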

