Shadow AI and agentic AI: risks, signs of exposure and how to regain control
The adoption of AI in the enterprise has moved from being a trend to becoming an operational reality. We are no longer talking about isolated pilots or initiatives tested in controlled environments. AI is now embedded in business processes, connected to critical enterprise data and used in decisions that directly affect operations.
However, depending on each organisation’s level of maturity, this progress is uneven. Some have developed governance and security frameworks, while in others growth has been more organic, driven by the need to improve efficiency. It is in this second context that the phenomenon of shadow AI emerges: employees using external AI tools without oversight, creating a risk of information leakage, loss of control or regulatory non-compliance. It is a serious issue, but one that can be managed within security frameworks.
The landscape is now adding a new layer of complexity: agentic AI. Systems capable of executing tasks, interacting with applications, accessing corporate data and operating with a degree of autonomy. This evolution represents a shift in how technology is integrated into processes.
It is in this move towards autonomous systems that a new risk model emerges. AI stops being an advisory tool and becomes an active element in business operations, directly affecting decisions, access and task execution.
This difference matters because, until now, the main risk lay in what AI produced: incorrect responses, bias and information leakage. Critical, but still contained. Now we are dealing with models that access, connect and execute.
This means that the phenomenon of shadow AI is no longer sufficient as an explanatory framework. The issue is no longer just who is using AI, but how it is integrated into operations: what AI can do on its own, under which identity, with which permissions, and across which data and systems.
With agentic AI, organisations are taking on new risks that, in many cases, they have not yet identified.
How this change is showing up in companies
This shift from AI that responds to AI that acts often happens outside the usual oversight channels.
In enterprise environments, we are already seeing automations that connect tools. For example, an assistant that not only drafts an email but also accesses the CRM, retrieves customer data and suggests an action: a workflow that crosses information from different sources without going through formal validation. Although these are useful developments, they introduce dependencies and access routes that are not always clearly defined.
This is also reflected in how these solutions are built. It is becoming increasingly common for business users to build their own workflows or applications using AI platforms, without going through the usual audit, development, security or compliance channels. It is not traditional development, but nor is it passive use. It is an intermediate space in which the term shadow developers is beginning to emerge.
In these scenarios, security is not designed in from the outset. Integrations are built quickly, permissions are granted broadly, and the operating logic is spread across prompts, tools and configurations that are difficult to audit. What improves productivity can become a vulnerability.
None of this is driven by malicious intent, but by the need to move faster. Precisely because of this lack of control over the integration, risk starts to become entrenched.
What changes with agentic AI?
Agentic AI changes the nature of risk. Until now, AI-related issues were largely confined to the interaction itself. Although the consequences could be significant, the risk remained contained within that interaction.
Now, integration into real processes implies continuous connection to systems and data. Risk is no longer occasional; it becomes inherent to the process. The context is no longer isolated information entered in a prompt, but a constant flow between tools, models and internal sources, often without adequate boundaries.
If that flow is not properly managed, exposure becomes part of the process itself.
Something similar happens with access. Agents need permissions to operate: they access applications, retrieve data and execute actions. But in many cases those permissions are granted for convenience rather than according to least-privilege principles, creating risks that are difficult to detect.
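The least-privilege principle described above can be made concrete in code. The following is a minimal sketch, with hypothetical agent names, scopes and actions, of granting each agent an explicit set of scopes and checking them before any action is executed, rather than handing out broad access for convenience:

```python
# Minimal least-privilege sketch: each agent holds only explicitly granted
# scopes, and every action is checked against them before it runs.
# Agent names, scope strings and actions are illustrative, not a real API.

AGENT_SCOPES = {
    "crm-assistant": {"crm:read"},                      # read-only CRM access
    "billing-bot": {"billing:read", "billing:write"},
}

def is_allowed(agent: str, required_scope: str) -> bool:
    """Return True only if the agent was explicitly granted the scope."""
    return required_scope in AGENT_SCOPES.get(agent, set())

def perform_action(agent: str, required_scope: str, action):
    """Execute the action only after the scope check passes."""
    if not is_allowed(agent, required_scope):
        raise PermissionError(f"{agent} lacks scope {required_scope}")
    return action()

print(is_allowed("crm-assistant", "crm:read"))   # True
print(is_allowed("crm-assistant", "crm:write"))  # False: never granted
```

The point of the sketch is the default: an agent that is not in the table, or a scope that was never granted, is denied, which is the opposite of the convenience-first permissioning the article describes.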
The nature of actions also changes. Previously, an incorrect output might result in a poor human decision. Now it can translate directly into an executed action, reducing the margin for containment.
Then there is traceability. When multiple systems are involved, reconstructing what happened is no longer straightforward. Without that capability, auditing or explaining behaviour becomes difficult.
In this context, threats such as prompt manipulation or source contamination affect not only the model, but the wider system as a whole.
With agentic AI, risk stops being something observable from the outside and becomes embedded in operations.
How to identify a lack of control over AI
What matters now is no longer what could happen, but what is already happening, often without sufficient visibility.
The use of tools outside the corporate environment continues to grow, alongside open integrations with SaaS applications or internal repositories. Even when they work well, they expand the exposure surface.
In many cases there is no complete inventory of models, agents or automations. It is known that AI is present, but not where or to what extent, which makes traceability more difficult.
Excessive permissions are also common: too many open APIs, broad access rights and uncontrolled integrations.
And then there is the data: sensitive information that is unclassified, overshared or not protected by adequate controls for AI environments.
When lack of control is continuous, risk stops being isolated and becomes systemic.
What needs to be defined before scaling AI and agents
Before scaling the use of AI and agents, there is a minimum foundation that is not optional. This is not about limiting adoption, but about preventing uncontrolled growth from the outset.
- Scope. Understand what AI usage actually exists across the organisation, both inside and outside corporate environments. Without that inventory, every decision starts from an incomplete view.
- Defining acceptable use. Not every use case carries the same level of risk, and they should not all be treated the same way. Defining which uses are acceptable, under what conditions and with which limitations prevents adoption from moving faster than the ability to govern it.
- Defining responsibilities. Who can deploy, who validates and who supervises. When these responsibilities are unclear, control becomes diluted.
- Data governance. What information can be used, in which contexts and under which restrictions. Without this criterion, exposure cannot be properly bounded.
- Human oversight, especially in sensitive processes. The concept of human-in-the-loop is not a brake, but a control mechanism when there is an impact on customers, the business or compliance.
- Regulatory framework. Regulations such as the AI Act, NIS2 and DORA are already defining requirements for traceability, accountability and control.
Without these elements, what looks like rapid AI adoption is actually a disorderly expansion of risk.
Operational controls: security, identity, data and compliance
Security
From a security perspective, the focus is on how models interact with their environment. Without protection against attacks such as prompt injection or information extraction, any interaction point can become a path for manipulation or leakage.
Protecting systems is not enough: it is necessary to protect the flows.
Identity
In identity, the shift is profound. It is no longer only about users, but about agents that operate with verifiable credentials, access systems and execute actions.
Without clear governance of non-human identities and without applying the principle of least privilege, risk grows cumulatively and becomes difficult to detect.
Data
In the data domain, exposure is continuous. Models need context, and that context often includes sensitive information.
Without classification, encryption, data loss prevention and protection of enterprise data, that flow becomes a permanent risk vector.
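One small, concrete piece of that protection is stripping obviously sensitive patterns before text leaves the enterprise boundary as model context. The sketch below is a deliberately minimal illustration using two example patterns; real data loss prevention requires proper classification tooling, not a pair of regular expressions:

```python
import re

# Minimal illustration: redact obviously sensitive patterns before text is
# used as model context. The two patterns are examples only; a real DLP
# pipeline covers many more data types and uses classification metadata.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact ana@example.com about account ES9121000418450200051332"))
# -> Contact [EMAIL] about account [IBAN]
```

Redaction at the boundary means the model still receives usable context while the identifying values never enter the flow at all.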
Compliance
In compliance, the key is evidence. It is not enough to comply: organisations must be able to demonstrate what happened, how and why.
The basic requirements are traceability, auditability and the ability to explain each interaction. Without that control, any incident quickly becomes more serious.
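The traceability requirement can be sketched as an append-only audit trail: one timestamped entry per executed agent action, so that any incident can be reconstructed afterwards. The field names and agents below are illustrative assumptions, not a prescribed schema:

```python
import json
import time

# Sketch of an append-only audit trail for agent actions. Field names and
# agent identifiers are illustrative; the point is one immutable record
# per action, queryable after the fact.

AUDIT_LOG: list[str] = []

def record_action(agent: str, action: str, target: str, outcome: str) -> None:
    """Append one timestamped entry for each action an agent executes."""
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "target": target,
        "outcome": outcome,
    }))

record_action("crm-assistant", "read", "crm/contacts/42", "success")
entry = json.loads(AUDIT_LOG[-1])
print(entry["agent"], entry["action"], entry["outcome"])
# crm-assistant read success
```

Recording denied actions as well as successful ones matters: the pattern of refusals is often the first sign that an agent is probing beyond its intended role.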
These controls do not work in isolation: it is their combination that determines the level of protection.
Visibility, detection and response: the new operational standard
When AI enters operational processes, control depends on the ability to see, detect and act in real time.
Visibility
Without visibility, it is impossible to know what is happening: which agents are active, which data they use and which actions they execute. Monitoring is a basic condition of governance.
This visibility must be comprehensive: models, data, identities and integrations. Risks do not appear at a single point, but in the interaction between them.
As these systems become integrated into operations, this visibility evolves towards observability, capable of providing context and understanding of what is happening.
Detection
From that point on, detection makes it possible to identify anomalies before they become incidents: unexpected access, anomalous behaviour or deviations in usage.
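A simple way to picture that kind of detection is a behavioural baseline per agent: record which resources an agent normally touches during a trusted observation window, then flag anything outside it. The agents and resource names here are hypothetical, and a production system would use richer signals than set membership:

```python
from collections import defaultdict

# Sketch of baseline-based anomaly detection for agent accesses.
# Agents and resources are hypothetical; real detection would also weigh
# timing, volume and sequence of actions, not just novelty.

baseline = defaultdict(set)

def record(agent: str, resource: str) -> None:
    """Learn normal behaviour during a trusted observation window."""
    baseline[agent].add(resource)

def is_anomalous(agent: str, resource: str) -> bool:
    """Anything outside the learned baseline deserves review."""
    return resource not in baseline[agent]

record("crm-assistant", "crm/contacts")
print(is_anomalous("crm-assistant", "crm/contacts"))  # False: seen before
print(is_anomalous("crm-assistant", "hr/payroll"))    # True: never accessed
```

An unknown agent, or a known agent touching an unseen system, both surface as anomalies by default, which is exactly the posture the article argues for.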
Response
The ability to respond within seconds in order to contain incidents and automate actions is where traditional models fall short. This is where integration with specialised SOCs becomes meaningful, especially in environments where AI interacts with critical systems.
In addition, every incident is an opportunity to adjust controls and strengthen the system. AI security is a continuous process.
Effective management requires technology, methodologies and experience in order to analyse and act with sound judgement.
How to start regaining control
Organisations do not need to wait for risk to become explicit. In many cases it already exists and can be addressed proactively.
- Understand the real exposure, beyond shadow AI: identify agents, automations and integrations already in use.
- Identify non-human identities and their permissions. What once seemed secondary is now essential.
- Review access and define limits, applying least privilege and reducing unnecessary integrations.
- Define corporate policies: what can be done, what cannot and under which conditions.
- Establish continuous monitoring in order to understand in real time what is happening.
- Strengthen compliance, ensuring traceability, control and the ability to respond.
This is not a long-term solution in itself, but it is a starting point for managing AI in a controlled way.
When AI enters operations, the way organisations adopt it must change.
Conclusion
When AI is integrated into operations, the challenge is to do so with control: understanding the risks, protecting data and identities, and operating with continuous visibility wherever AI acts.
At Telefónica Tech, we support organisations throughout their Secure Journey to AI, helping them to identify risks from the earliest stages, define controls and establish a governance and continuous response model aligned with their operational and compliance needs.