Artificial intelligence and Cybersecurity: which comes first?
The chicken-and-egg dilemma in the digital age
Artificial intelligence and Cyber Security are now inseparable. When combined, they reinforce each other. However, they also challenge one another.
This relationship raises an interesting question: which came first—and which should come first today: Artificial intelligence or Cybersecurity?
Answering it is not merely a historical exercise. It helps us understand why both disciplines are evolving together and why, in today’s context, neither can move forward without the other.
The real question today is why one cannot advance without the other.
Which came first?
Before analysing what should come first today, it is worth looking back at the origins of each discipline to understand which emerged first and in what context.
If we examine their historical evolution, we see that their origins were close in time but very different in motivation:
Artificial intelligence was born as an academic concept, without an immediate practical application. The term was formally coined in 1956 during the Dartmouth Summer Research Project on Artificial Intelligence. Even earlier, in 1950, Alan Turing had already laid the conceptual foundations of the discipline.
Cybersecurity, on the other hand, emerged as a practical necessity, linked to protecting systems and data. Its origins, then referred to as information security, are closely tied to multi-user systems and the need to safeguard sensitive information. Well-documented examples include early password systems and research funded by DARPA in the 1960s.
For this reason, it is difficult to clearly determine which came first, as the answer depends on whether we take an academic perspective or an operational perspective. For years, both disciplines evolved in parallel: Artificial intelligence sought to learn, while Cybersecurity sought to protect.
Their intersection would come much later.
The meeting point: more data, more volume, more risks
Everything changed with massive digitalisation: more connected devices, more data in circulation, more users, more alerts…
This new scenario produced two simultaneous effects:
- More opportunities to apply Artificial intelligence, driven by the exponential growth in data volume.
- A broader attack surface, and therefore an increase in security risks.
Cybersecurity had to react quickly, as manual methods were no longer sufficient. SIEM alerts began to surge, vulnerabilities became harder to manage, and even low-criticality incidents required analysis and response from Cybersecurity Operations Center (SOC) analysts.
It was in this context that Artificial intelligence began to play a key role in protecting digital environments.
Massive digitalisation multiplied the value of Artificial intelligence while exponentially expanding the attack surface.
When Artificial Intelligence becomes an ally of Cybersecurity
Today, Artificial intelligence is an essential tool for protecting digital systems. Clear examples include:
Anomaly detection
Identification of anomalous behaviour in networks and systems in real time. In this field, we are no longer talking only about automation; we are already seeing significant and tangible advances in autonomous agents that support SOC analysts in triage and investigation tasks.
Faster incident response
Automation of repetitive tasks and reduced response times, support in analysis, playbook development and identification of indicators of compromise (IOC).
Large-scale data analysis
At scale, this would be unfeasible manually. We have seen this for years, but the current ability to provide clearer and more accessible contextual insights continues to evolve.
Organisations such as the European Union Agency for Cybersecurity (ENISA) highlight the value of Artificial intelligence in anticipating threats and strengthening digital defence.
However, the relationship is not only supportive.
When Artificial intelligence introduces new risks
Here lies the real chicken-and-egg dilemma.
The same technology that helps protect can also become an additional source of risk. On the one hand, AI can be used to attack. For example, by generating malicious content or refining social engineering techniques. On the other hand, its rapid adoption across all sectors, impacting data, applications, infrastructures and models, amplifies both its impact and its exposure.
At this point, a clear principle must be established: Cybersecurity needs Artificial intelligence to continue evolving, and it is not viable to deploy Generative Artificial intelligence without Cybersecurity.
Any Generative Artificial intelligence use case within an organisation must consider:
- AI security governance
Identify applicable regulations (AI Act, GDPR, sector-specific regulations…), define policies, and establish clear roles and responsibilities within the organisation. - Risk identification
Analyse the data used to train the model, who has access to it, which applications and models are in use, and the supporting infrastructures, ensuring that we are not introducing new risks into the organisation. - Early vulnerability detection
Across data, identities, infrastructures and models. - Threat protection
Implement robust measures to ensure resilient infrastructures, secure applications and effective access and data control. - Comprehensive response
Continuous monitoring, integration with a specialised SOC and a continuous improvement framework for incident management.
■ Every advance in AI creates new risks, and every new risk drives the development of new defences. It is a continuous cycle that requires integrating security throughout the entire model lifecycle, especially in enterprise environments.
It is not viable to deploy Generative Artificial intelligence without Cybersecurity.
Minimum controls for enterprise LLMs
The adoption of language models in enterprise environments requires a set of basic controls to reduce risks, strengthen governance and build trust. These controls are the starting point for secure and responsible AI, aligned with our structured framework, Secure Journey to AI.
Access control and data isolation
Define who can interact with the model, from which applications and with what information, applying the principle of least privilege and avoiding unnecessary exposure of sensitive data.
Logging and auditing of prompts and outputs
Ensure traceability of model interactions to facilitate anomaly detection, incident investigation and regulatory compliance.
Output filtering and validation
Incorporate mechanisms to reduce the risk of data leakage, inappropriate content generation or responses that may negatively impact reputation or decision-making.
Regular offensive security testing
Subject models to recurring tests against techniques such as prompt injection, jailbreaks or information extraction, validating their resilience against attacks and misuse.
Supplier and supply chain review
Continuously assess third-party models, libraries, datasets and services, identifying risks associated with external dependencies and supply chain attacks.
Retention and responsible use policy
Establish clear rules on how long prompts, outputs and logs are stored and how they may be used, in line with regulatory and privacy requirements.
Human oversight in sensitive decisions
Incorporate human-in-the-loop mechanisms in high-impact scenarios, ensuring that automated decisions remain subject to human control, review and accountability.
These minimum controls do not limit innovation, they make it viable. They enable organisations to scale Artificial intelligence securely, transparently and confidently, integrating protection as a natural part of the model lifecycle.
Every advance in AI creates new risks, and every new risk drives the development of new defences. Security is not a final state, but a continuous cycle.
A mutually reinforcing race
Returning to the initial question, rather than asking which came first, the key issue is this: can there be AI without Cybersecurity or Cybersecurity without AI?
The reality is clear:
- Cybersecurity needs Artificial intelligence to remain effective in the face of today’s complexity.
- Artificial intelligence needs Cyber Security to be reliable, resilient and sustainable over time.
Without proper protection, data can be manipulated, models can be attacked and automated decisions can become dangerous.
That is why international organisations and leading frameworks insist on integrating security from the earliest design stages.
What does this tell us about the future?
Everything points to a clear conclusion:
- There will be no trustworthy Artificial intelligence without Cybersecurity.
- There will be no effective Cybersecurity without Artificial intelligence.
■ The challenge is not choosing one over the other, but designing and evolving them together, responsibly and transparently.
This means:
- Designing secure systems from the outset.
- Training people, not just technologies.
- Establishing clear standards and ethical principles.
AI will only be trustworthy if security, governance and accountability are embedded by design.
Conclusion: a question that matters less and less
Returning to the original question, does Artificial intelligence or Cybersecurity come first?
In practice, it no longer makes sense to separate them.
As with the classic chicken-and-egg dilemma, the answer lies not in their origin but in their joint evolution. The real question now is how to integrate both by design, ensuring that every use case embeds security as a structural element rather than an afterthought.
At Telefónica Tech, this is precisely the approach we follow: a Secure Journey to AI that integrates security, governance and resilience throughout the entire Artificial intelligence lifecycle.
Photo: Freepik.
Hybrid Cloud
Cyber Security & NaaS
AI & Data
IoT & Connectivity
Business Applications
Intelligent Workplace
Consulting & Professional Services
Small Medium Enterprise
Health and Social Care
Industry
Retail
Tourism and Leisure
Transport & Logistics
Energy & Utilities
Banking and Finance
Smart Cities
Public Sector