Shadow AI: how unsupervised AI tools are challenging enterprise security

November 6, 2025

Have you ever wondered how many AI tools are being used in your company without supervision?

The rapid adoption of generative artificial intelligence (AI) has ushered in a new era of efficiency and creativity in the workplace by democratising the use of AI. However, it has also given rise to a growing risk: the use of AI tools by employees or departments without the company’s approval or oversight.

This practice, known as Shadow AI, has evolved from an isolated occurrence to a global risk for corporate security, privacy, and regulatory compliance.

A growing global phenomenon

Easy access to cloud-based AI tools—and the lack of clear corporate policies in many organisations—has fuelled the rise of Shadow AI.

  • According to the Cloud and Threat Report: Shadow AI and Agentic AI 2025 by Netskope, 89% of companies are using at least one generative AI application, often without formal approval. The same report shows that the number of users interacting with these tools has increased by more than 50% in recent months.
  • The State of AI in Business 2025 report by MIT also reveals that while 40% of the companies surveyed have licensed generative AI solutions, in over 90% of them, employees are using AI tools with personal accounts.
  • Findings from Cisco confirm this lack of oversight. Its 2025 Cybersecurity Readiness Index reveals that 60% of companies are unable to monitor the prompts submitted by employees to generative AI tools, and the same percentage admits they lack the ability to detect the use of unsupervised AI tools in their environments.

Moreover, the amount of data being shared with these tools is significant: companies upload an average of 8.2 GB per month, according to Netskope.

A new kind of business risk

Shadow AI not only amplifies security threats; it also undermines traceability, operational consistency, and trust. Its impact can be seen across several areas:

  • Data leakage and loss of confidentiality. Prompts, documents, or data shared with external tools may be stored or reused beyond the company’s control.
  • Regulatory risk. Regulations such as the General Data Protection Regulation (GDPR) and the European AI Act require transparency and oversight of AI systems. Unauthorised use can lead to fines and reputational damage.
  • Bias and degraded quality. Public, non-audited tools may produce inaccurate, biased, or discriminatory outcomes that affect business decisions.
  • Technological fragmentation. The uncontrolled spread of applications hinders IT management, support, and the security of digital workplaces.
  • Lack of traceability. Without logs or audit trails, companies cannot track how specific outputs or recommendations were generated.
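The traceability gap above can be illustrated with a minimal, hypothetical sketch: a sanctioned gateway that wraps every model call and appends an audit record, so that outputs can later be traced back to a user and a prompt. All names here (the `audited_call` wrapper, the log format) are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import json
import time

def audited_call(user: str, prompt: str, model_call, log_path: str = "ai_audit.log") -> str:
    """Call an AI model via `model_call` and append an audit record for traceability."""
    response = model_call(prompt)  # model_call stands in for any provider SDK
    record = {
        "ts": time.time(),
        "user": user,
        # Hash the prompt rather than storing it raw, to limit sensitive-data sprawl
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_len": len(response),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

With records like these, a company can at least answer "who asked what, and when" — precisely what is lost when employees use public tools through personal accounts.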

Unsupervised AI use also increases exposure to more sophisticated attacks, such as model or data poisoning, in which training data or system parameters are manipulated to alter outputs, and unauthorised access, which compromises identities and credentials. The result is a loss of digital trust and reputational harm.

From reaction to governance: how to address it

Banning AI use is not an effective strategy. Generative AI tools are already embedded in the workplace and are only becoming more widespread.

The key lies in governing their use. Companies must define frameworks to identify, regulate, and monitor AI adoption in line with their security policies and business goals.

To do so, organisations should take a proactive approach to incident prevention, turning their AI implementation strategy into both a competitive advantage and a trust-building asset for the business.

Secure Journey to AI: Telefónica Tech’s solution

At Telefónica Tech, we support businesses through this process with our Secure Journey to AI framework: a comprehensive strategy designed to prevent and control the risks of Shadow AI.

This approach is built on three core pillars:

  • Early risk detection. Identifying unauthorised AI usage, model vulnerabilities, and exposure of sensitive data.
  • Protection against threats. Applying advanced Cyber Security measures, access controls, identity management, and data protection throughout the AI lifecycle.
  • 360° response. Continuous monitoring, integration with a specialised AI SOC, and a model of constant improvement to counter emerging threats.
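The "early risk detection" pillar can be sketched in miniature: scanning web-proxy logs for traffic to known generative AI domains that are not on the company's allow-list. The domain lists and log format below are assumptions for illustration only, not part of any specific product.

```python
import csv
import io

# Hypothetical allow-list: AI services the company has formally sanctioned
SANCTIONED_AI_DOMAINS = {"copilot.example-corp.com"}

# Hypothetical watch-list of known generative AI endpoints (illustrative only)
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(proxy_log_csv: str) -> list[dict]:
    """Return proxy-log rows that hit a known AI domain not on the allow-list."""
    findings = []
    for row in csv.DictReader(io.StringIO(proxy_log_csv)):
        domain = row["domain"].lower()
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED_AI_DOMAINS:
            findings.append({"user": row["user"], "domain": domain})
    return findings
```

In practice this kind of check would run continuously inside a SOC pipeline rather than over a CSV, but the principle is the same: compare observed AI traffic against a governed allow-list and surface the gap.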

This methodology gives companies full visibility into AI usage, ensures process traceability, and supports regulatory compliance from day one.

The secure and responsible integration of AI enables companies to mitigate risks while strengthening resilience and digital trust.

Conclusion

Shadow AI highlights the gap between the speed of technological change and companies’ ability to adopt and manage it securely. It reflects employees’ drive to innovate—but also exposes the vulnerabilities and potential consequences of digital transformation without proper governance. The challenge, therefore, is not to restrict AI, but to channel its use in a safe and productive way.

At Telefónica Tech, we help companies integrate AI securely, responsibly, and in line with international regulatory frameworks. Only those that embed AI with strong governance and security will be able to turn trust into a sustainable competitive advantage.
