"Cyber Security is essential to unlock the full value of Generative AI in business environments"
The adoption of generative artificial intelligence in business environments brings significant challenges in terms of responsibility, security, and governance. Cyber Security is a key pillar to protect sensitive data, secure models, and ensure compliance with current regulations, building trust and safeguarding the integrity of AI solutions. We spoke about all of this with David Prieto, Head of Identity and AI Security, and Elena Zapatero, Business Development Manager.
______
What are the main challenges companies face when adopting AI solutions?
The first challenge is clearly identifying who takes responsibility for AI initiatives within the organisation and ensuring these are developed in line with defined security standards. This first step alone already requires coordinated involvement from multiple teams.
Once responsibility is established, the organisation must then tackle further challenges, both from a technical perspective and in terms of governance.
Some of the most pressing include lack of visibility over the AI applications in use, difficulty protecting sensitive data during model training or RAG processes, exposure to vulnerabilities, and limited readiness against emerging threats like prompt injection or model jailbreaking.
What’s more, adopting AI also means redesigning security architectures, ensuring compliance (GDPR, PCI-DSS, NIST AI), and coordinating various technological and human stakeholders within the organisation.
Why is security so important when implementing generative AI in a corporate setting?
Security is essential in generative AI because this technology not only has the potential to amplify existing threats, but also to introduce new forms of attack, manipulation and data leakage that directly affect the trust, integrity and business continuity of organisations.
Organisations typically adopt these technologies in two main ways: by consuming AI applications via web or SaaS services, or by building and deploying their own solutions on cloud infrastructure, or even on-premise. Each approach carries its own specific threats, which must be addressed through targeted protection strategies, closely aligned with the principles of the shared responsibility model.
In this context, the regulatory dimension becomes critically important. A clear example is the European Union’s Artificial Intelligence Act (AI Act), which represents the first comprehensive legislation on AI worldwide.
In any case, poorly protected models can leak sensitive information, be manipulated to generate inappropriate content, or suffer attacks that compromise their integrity—bearing in mind that we’re talking about applications that are, in some cases, becoming critical to the business.
What specific security measures should be implemented to protect sensitive data used in training generative AI models?
Protecting sensitive data in the context of generative AI requires a tailored approach depending on how the data is used: whether it’s used to train proprietary models or exposed through pre-trained models that access internal sources (such as via RAG).
In both cases, it’s crucial to restrict access to sensitive data using identity protection mechanisms such as multi-factor authentication (MFA), role-based access control, conditional access policies, and advanced ID Protection and Governance solutions. These measures ensure that only authorised data is accessed, reducing the risk of inappropriate access.
Security in collaborative environments and proper classification and protection of information are particularly critical in models connected to internal sources (RAG), where real-time access to shared documents requires proper information tagging to enable Data Loss Prevention (DLP) technologies.
In all scenarios, traceability and auditing must be ensured to allow rapid response to incidents.
Whether your organisation is exploring generative AI models connected to internal data or training proprietary models from scratch, Cyber Security and data protection are not optional. They are part of the responsible design of any AI-based solution. Investing in a secure and traceable architecture not only protects your data—it also safeguards trust in your outcomes.

How does the high-level architecture of a generative AI application affect protection measures?
The high-level architecture of a generative AI application is made up of three main layers: model training (both base and fine-tuning), the LLM runtime, and the application layer. Each of these layers requires specific controls to ensure secure operation, in line with the risks inherent in corporate environments.
- Training data control. Protecting sensitive data used in training requires strong data governance, access control, and tools like DLP, classification, and auditing to prevent exposure or misuse.
- LLM runtime security. Securing the runtime—which processes prompts and generates responses—requires robust protection based on three pillars: infrastructure controls such as microsegmentation and CNAPP solutions (CSPM, CWPP, CIEM) for cloud or hybrid environments; continuous AI security posture management (AI-SPM) to monitor, detect deviations, and apply proactive corrective actions; and targeted offensive testing for generative AI, including prompt injection, data extraction or response manipulation tests to validate model resilience against advanced attacks.
- Application layer security. At the user interaction level, continuous evaluation through generative AI security capabilities (offensive security) and the deployment of solutions like WAD help reinforce API, plugin and extension security and resilience.
What are the main risks associated with implementing generative AI, and how can they be mitigated?
As we've seen, despite its many benefits, the emergence of new threats linked to generative AI inevitably brings new risks that organisations must identify and manage. These include data loss caused by unauthorised tools or user unawareness, unauthorised access to sensitive information, model manipulation via prompt injection or jailbreaks, use of infrastructure, models or applications without security guarantees, and model poisoning, which directly compromises the model’s integrity, reliability and ability to deliver valid responses.
On the regulatory front, the European Artificial Intelligence Act (AI Act) sets out a new legal framework based on a risk management approach that classifies AI systems into four levels: unacceptable risks —explicitly prohibited, such as social scoring or cognitive manipulation—, high risks —those potentially affecting fundamental rights—, limited risks, and minimal risks.
It is therefore essential to map the risks affecting the organisation and develop a comprehensive security strategy to manage them effectively. This strategy should combine identification, protection, and response capabilities, enabling proactive management of threats related to generative AI.
How can generative AI be secured in a corporate environment through 360° identification, protection, and response?
Securing generative AI in a corporate setting requires implementing a comprehensive security framework based on three phases: Identification, Protection and 360° Response, with specific controls across four critical areas: infrastructure, AI model, data/identity, and applications.
- During the Identification phase, infrastructure assessments are performed —including microsegmentation analysis, vendor risk management (VRM), and AI security posture (AI-SPM)— along with application evaluations to identify vulnerabilities in both technical environments and AI solutions. These efforts are complemented by identity and data audits to detect unauthorised access, misconfigurations, and use of unapproved AI tools (Shadow AI).
- The Protection phase applies layered controls: microsegmentation and CNAPP for infrastructure; data labelling, DLP and advanced IAM to protect identities and sensitive information; AI Gateway and security policies for LLM models; and WAD and sharing controls to secure user applications.
- Finally, the 360° Response phase relies on end-to-end visibility, observability, and an integrated AI SOC, enabling swift detection and response to threats or incidents. This strategy ensures full-cycle protection across the generative AI environment.

What role do audits and risk management play in securing generative AI?
Audits and risk management play a key role in securing generative AI—not just from a technical or regulatory standpoint, but also as critical elements in protecting business value and continuity.
Risk management helps identify, assess and prioritise the specific threats introduced by generative AI and their potential impact on critical processes, corporate reputation, compliance or intellectual property. This business impact-oriented perspective is essential for adopting proportional and effective mitigation measures.
Security audits, on the other hand, are essential for verifying that AI systems meet security control requirements. At Telefónica Tech, we’ve developed our own methodology structured into six phases:
- Reconnaissance and enumeration of attack surfaces and exposed services.
- API security analysis, assessing authentication, authorisation and protection against abuse.
- AI model assessment, identifying vulnerabilities such as prompt injection or response manipulation.
- Cloud infrastructure analysis, detecting misconfigurations or pipeline weaknesses.
- Data exfiltration simulation, testing for potential leaks due to poor design or misuse.
- Resilience assessment, measuring the system’s ability to detect, resist, and recover from attacks.
This methodology provides a comprehensive view of the security landscape, helping organisations anticipate risks and proactively strengthen their defences.
What security benefits have companies seen from adopting secure generative AI solutions with Telefónica Tech?
Organisations have achieved end-to-end visibility across both their AI environments and data, which has been key to reducing attack surface, strengthening governance, and improving compliance.
Implementing solutions such as AI-SPM, AI Gateway, WAD, and VRM has enabled the protection of critical applications, early vulnerability detection, and agile, effective incident response. These capabilities have been instrumental in deploying generative AI environments in a more secure, resilient, and regulation-aligned way.
And when it comes to user productivity environments, the improvement has been substantial—avoiding oversharing or unauthorised access to sensitive data.